LinuxQuestions.org > Forums > Linux Forums > Linux - Newbie
08-03-2009, 11:04 AM   #1
ericthefish (LQ Newbie)

Booting / LVM / failing hard drive issues (Ubuntu Server)

Hi all,

I seem to have a complex issue on my hands that I can't figure out. First, I couldn't access my LVM'd hard drives over SSH without half of the content being missing or locked out (read-only, or unable to open at all). The problem then escalated to the machine not booting cleanly: all I got was a long slew of [100.1234] ata1.00: error messages (they eventually DO stop once the timestamps reach roughly 998, and then I can log in and things seem "stable").

Part way through this rebooting ordeal I decided to make the power-on self-test stop and wait if it finds errors (via the BIOS). That showed me that my 750 GB SATA drive is "bad" while my two (older) 120 GB and 250 GB IDE drives are "ok". I then unplugged the SATA drive, rebooted, and didn't see those error messages. However, the LVM still obviously wasn't coming together, since the volume group is made up of the 120 GB, 250 GB, and 750 GB drives.
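I could probably also confirm what the BIOS is reporting by looking at the drive's SMART data; something like this, assuming smartmontools is installed and the SATA disk really is /dev/sda (I have not double-checked the device name):
Code:
# quick health verdict straight from the drive (older smartctl may need -d ata)
sudo smartctl -H /dev/sda

# full attribute dump; reallocated / pending sector counts are the ones to watch
sudo smartctl -a /dev/sda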

So, right now it seems:

a) I can't boot smoothly with all drives in place.
b) I can't do anything with LVM if I remove the 750 GB SATA drive.
c) With all drives hooked up, I can wait 10 minutes for the errors to go away, but STILL can't do anything with LVM.
d) I need to recover the LVM contents so that I can cleanly migrate ALL of those files (1.1 TB of them) onto a new 1.5 TB drive and do away with LVM.

This is all running Ubuntu 7.10 Server Edition (the 32-bit version on a 64-bit computer). I DID post a question on the Ubuntu forum as well, but so far I've had no replies at all.

Oh, here are a few pics of the booting process, with one or two "fail" lines showing.
Attached thumbnails: P1000255.JPG, P1000257.JPG, P1000258.JPG (photos of the boot error messages).
 
08-03-2009, 11:36 AM   #2
allanf (Member)

The filesystem checks at boot time only check the filesystems defined on plain partitions. I would suggest booting a live Knoppix CD/DVD, as it has LVM support. Activate the LVM and then run "fsck" on the logical volumes.

To activate the LVM:
Code:
pvscan
lvscan
lvchange -a y   your_volume_group_name_goes_here
lvscan
The device nodes that you then run "fsck" on will look similar to:
/dev/mapper/your_volume_group_name_goes_here-your_logical_volume_goes_here
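For example, with a hypothetical volume group called vg_data and a logical volume called lv_files (substitute your real names), the activate-and-check step might look like the following; vgchange -a y is the volume-group-level equivalent of the lvchange above, and ext3 is only a guess at the filesystem:
Code:
pvscan                                      # list the physical volumes LVM can see
vgchange -a y vg_data                       # activate every logical volume in the group
lvscan                                      # the logical volume should now show as ACTIVE
fsck -t ext3 /dev/mapper/vg_data-lv_files   # check it (use the real filesystem type)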

While the LVM is working, the easiest way to migrate to a new drive is:
1) Partition the new drive and create the LVM partition (plus any non-LVM partitions that currently reside on drives you are planning to remove!).
2) Copy the data from the old non-LVM partitions to the new partitions.
3) Migrate the LVM data:
Code:
# add the new LVM partition(s) to the volume group:
vgextend your_volume_group_name_goes_here your_new_LVM_partition_goes_here

# migrate the data off the LVM partition(s) to be removed:
pvmove your_old_LVM_partition_being_removed_goes_here

# remove the old LVM partition(s) from the volume group:
vgreduce your_volume_group_name_goes_here your_old_LVM_partition_being_removed_goes_here
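Concretely, if the new drive were /dev/sdb with a single LVM partition /dev/sdb1 and the old physical volume being retired were /dev/sda1 (all placeholder names, not your actual devices), the sequence would be roughly:
Code:
pvcreate /dev/sdb1           # initialize the new partition as an LVM physical volume
vgextend vg_data /dev/sdb1   # add it to the volume group
pvmove /dev/sda1             # move all extents off the old physical volume
vgreduce vg_data /dev/sda1   # drop the old physical volume from the group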
 
08-03-2009, 01:19 PM   #3
ericthefish (LQ Newbie, original poster)

For migrating to a new drive, I do NOT want the new drive to be part of an LVM in the future. I plan on just going with individual hard drives and mounting each one separately. That is more of a pain when deciding where to store stuff, but I think it's more reliable in case I have to move drives in and out of the computer.

So, having said that, can I just format the new 1.5 TB drive as a single giant XFS partition and dump stuff onto it from the (hopefully recovered) LVM drives? I actually have two new 1.5 TB drives available, so I can spread my recovered files over them.

I'll have to download the Knoppix CD in that case (as I don't have any). Is the CD or the DVD the better way to go?

 
08-03-2009, 11:19 PM   #4
allanf (Member)

Quote:
Originally Posted by ericthefish
I'll have to download the Knoppix CD in that case (as I don't have any). Is the CD or the DVD the better way to go?
I use the CD, but either should work. I even used the Knoppix CD to install Gentoo on my laptop. I use LVM even on my notebook, as it lets me move to a larger drive (over a USB or FireWire external enclosure) as prices drop; only the "/boot" partition has to be copied with dd. Way back in the 3.5-inch floppy days I used "Tom's Boot-Root Disk" for emergency repairs, but I have since moved to Knoppix as my emergency disk. The Fedora 10 LiveCD also includes LVM, but I also use reiserfs (which it does not have).
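That "/boot" copy is just a raw partition copy. Assuming, purely as an example, that the old /boot were /dev/sda1 and the new drive's boot partition were /dev/sdb1, it would be something like:
Code:
dd if=/dev/sda1 of=/dev/sdb1 bs=1M   # raw, block-for-block copy; the target partition must be at least as large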
 
08-03-2009, 11:25 PM   #5
allanf (Member)

If you have the new drive in place while Knoppix is running and fsck corrects the problem, you could use Knoppix to copy the information across. Knoppix should not be used to copy drive contents to a USB drive, though (even when using the boot option to correct the problem), as copying 320 GB that way will take well over 14 hours.
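(For reference, 320 GB in 14 hours works out to only about 6.5 MB/s of sustained throughput.)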
 
08-04-2009, 09:37 PM   #6
ericthefish (LQ Newbie, original poster)

Quote:
Originally Posted by allanf
If you have the new drive in place while Knoppix is running and fsck corrects the problem, you could use Knoppix to copy the information across.
So I should physically install both of my 1.5 TB drives before booting up the Knoppix live CD? Hmm, good idea. Then, if it CAN recover stuff, I can transfer it straight onto the two drives. Uhh, assuming I can get root access to do that!!!

Hmm, could I format the new drives with XFS and do the copy from the live CD? Or would I be limited to FAT32 (which, I think, doesn't keep file permissions)?
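I suppose I can also just check which mkfs helpers the live CD ships, to answer that myself; something like this (guessing at the paths, they may differ):
Code:
ls /sbin/mkfs.* /usr/sbin/mkfs.* 2>/dev/null   # lists the filesystems the live CD can create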
 
08-04-2009, 11:40 PM   #7
allanf (Member)

Knoppix has a "terminal as root" or "root terminal" entry.

On the KDE panel there is a penguin icon which will let you open a terminal as root. I cannot remember whether it has "xfs" support or not. Like I said, I tend to use ext2 or ext3 for /boot and reiserfs for almost everything else.

Yes, if it has the filesystem that you want, and fsck fixes the logical volumes, then the copying can be done. Don't forget to install GRUB on the new drive (and configure it) if you are planning to boot from it; of course, "/etc/fstab" and the GRUB configuration on the new drive will also need to be edited to match the device names (/dev/sdaX, /dev/sdbX, etc.) the drives will have when you actually boot the machine.
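As a rough sketch of that copy step, assuming the target partition turned out to be /dev/sdb1, you went with XFS, and the logical volume can be mounted (all device names and mount points here are examples only, and I am assuming mkfs.xfs and rsync are present on the CD):
Code:
# run as root from the live CD
mkfs.xfs /dev/sdb1
mkdir -p /mnt/old /mnt/new
mount -o ro /dev/mapper/your_volume_group-your_logical_volume /mnt/old
mount /dev/sdb1 /mnt/new

# copy everything, preserving permissions, ownership, timestamps and hard links
rsync -aH /mnt/old/ /mnt/new/
# (cp -a /mnt/old/. /mnt/new/ also works if rsync is not available)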
 
08-05-2009, 09:27 PM   #8
ericthefish (LQ Newbie, original poster)

Quote:
Originally Posted by allanf
I would suggest booting a live Knoppix CD/DVD, as it has LVM support. Activate the LVM (pvscan, lvscan, lvchange -a y ...) and then run fsck on the logical volumes.

OK, I'm stuck here. Help? I did the pvscan, which located the 3 hard drives and their labels (yay!), and then lvscan showed "ACTIVE '/dev/sata_volume_group/sata_nas' [1.02 TB] inherit".

HOWEVER, the first line of output after each of those commands was:

/dev/dm-0: read failed after 0 of 4096 at 0: input/output error

The lvchange command only printed that same "read failed" line, and an fsck attempt seems to think it's a zero-length partition.

See the attachment for a screenshot.

Ehhhh... help? Was I making progress?
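In case it matters, I can also poke at the device-mapper side directly from the live CD; something like this, assuming these tools are on the Knoppix disc:
Code:
sudo dmsetup ls      # dm-0 should correspond to one of these mapped devices
sudo dmsetup table   # shows which physical partitions each mapping sits on
dmesg | tail -n 50   # the underlying ata/disk read errors usually show up here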
Attached thumbnail: P1000268c.JPG (screenshot of the pvscan / lvscan / lvchange output).
 
08-05-2009, 11:42 PM   #9
allanf (Member)

Code:
# make the system scan for the LVM partitions
pvscan

# show the state of the logical volumes (ACTIVE or inactive)
lvscan

# change the logical volumes in a volume group to a state (y == active, n == inactive)
lvchange  -a y   your_volume_group_name_goes_here

# re-show the state of the logical volumes
lvscan
For example:
Code:
bash: lvscan
  ACTIVE            '/dev/r60_int_090331/opt' [12.00 GB] inherit
  ACTIVE            '/dev/r60_int_090331/portage' [11.00 GB] inherit
bash: fsck -t reiserfs /dev/r60_int_090331/portage
....
bash: fsck -t ext4 /dev/r60_int_090331/opt
....
bash:
 
08-06-2009, 06:45 AM   #10
ericthefish (LQ Newbie, original poster)

That is what I did, except that I didn't specify the filesystem type. Since I can't remember what filesystem I used on the individual drives when I first formatted them, I just tried it 4 different ways:


Code:
sudo fsck -t ext2 /dev/sata_volume_group/sata_nas
Code:
sudo fsck -t ext3 /dev/sata_volume_group/sata_nas
Both of those returned "Attempt to read block from filesystem resulted in short read while trying to open /dev/whatever. Could this be a zero-length partition?"



Code:
sudo fsck -t jfs /dev/sata_volume_group/sata_nas
This one got a bit more detailed, though maybe that doesn't help. It said:

using default parameter: -p
current device is: /dev/whatever
the superblock does not describe a correct jfs file system
if /dev/... is valid and contains a jfs file system, then both the primary and secondary superblocks are corrupt and cannot be repaired, and fsck cannot continue
otherwise make sure the entered device /dev/... is correct




Code:
sudo fsck -t lvm2 /dev/sata_volume_group/sata_nas
This one just came back with:

fsck: fsck.lvm2: not found
((I'm guessing it just doesn't have such a module, or that this isn't a valid way of doing things))
fsck: Error 2 while executing fsck.lvm2 for /dev/...



I'm still somewhat concerned about the error that I get with either pvscan or lvscan:

/dev/dm-0: read failed after 0 of 4096 at 0: input/output error




 
08-06-2009, 08:12 AM   #11
allanf (Member)

Quote:
Originally Posted by ericthefish
fsck: fsck.lvm2: not found ((I'm guessing it just doesn't have such a module, or that this isn't a valid way of doing things))
fsck: Error 2 while executing fsck.lvm2 for /dev/...

Note that "lvm" and "lvm2" are not filesystems. Logical Volume Management is not a filesystem; it is a concept that mainframe computers used back in the days of drum storage, when small devices were grouped together so that they looked like one larger physical device (the terminology dates from that era). The physical devices are the actual partitions, which for LVM are marked with partition type "8e". In Linux, the "pvscan" command makes the system scan the attached storage devices (including attached USB and FireWire storage) for LVM-marked partitions and records each new partition in a table along with the volume group it belongs to. The volume group can be thought of as a logical drive; within it, one or more logical volumes are created (think of these as logical partitions). Each logical volume then has a filesystem applied to it, either by the install program from your input or by the root user via the "mkfs" command.
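The same layout can also be displayed with the LVM reporting commands, which may make it easier to see (these come with the lvm2 tools):
Code:
pvs   # physical volumes and the volume group each one belongs to
vgs   # volume groups with their total and free sizes
lvs   # logical volumes inside each volume group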

Therefore, when running "fsck", the command needs to be told the type of filesystem that was applied to the logical volume.

In some cases the filesystem type "auto" can be used with the "fsck" command:
Code:
fsck -t auto /dev/sata_volume_group/sata_nas
But it is best to know the filesystem being used, as "auto" does not always recognize all of the filesystems that have been activated.
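If you are not sure which filesystem was applied, the superblock can usually identify itself. Assuming these tools are on the CD you are booting, either of the following will report the type without modifying anything:
Code:
blkid /dev/sata_volume_group/sata_nas     # prints TYPE="ext3", TYPE="xfs", etc.
file -s /dev/sata_volume_group/sata_nas   # reads the start of the device and describes it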
 
08-06-2009, 03:09 PM   #12
ericthefish (LQ Newbie, original poster)

Quote:
Originally Posted by allanf
Therefore, when running "fsck", the command needs to be told the type of filesystem that was applied to the logical volume.

So if I tried it specifically with ext2, ext3, and jfs, and each one came back with an error, what does that tell me about the structure? I'm 99% sure I didn't use XFS (but maybe I'll try it when I get home), and I never touch FAT or Reiser... I'm actually starting to think the drives were formatted with JFS, but is there a way to check that? And if they WERE JFS, and fsck says it can't get past the hosed blocks, what alternatives do I have?

I know for a fact that, until recently, I was able to navigate the files on there for the most part. The directory structure seemed to be OK, but some directories had "lost" their file contents and were empty. It was perfect a few months ago, and now it seems to be totally hosed.

(Some background: this is a file server which takes the three LVM'd drives and exports them over NFS, so that from my desktop computer I mount the NFS "drive" and get access to the data on the server, using the command "sudo mount server1:/sata_whatever /media/server_storage".)
 
08-06-2009, 09:32 PM   #13
ericthefish (LQ Newbie, original poster)

OK, so just for the fun of it I also ran:

Code:
sudo xfs_check /dev/sata....
and got the following error:

xfs_check: /dev/sata... is invalid (cannot read first 512 bytes)

Isn't that where the partition tables are written???

To see more, I ran:

Code:
sudo xfs_repair -n /dev/sata...
and got:

Phase 1 - find and verify superblock
superblock read failed, offset 0, size 524288, ag 0, rval -1
fatal error -- input/output error



Thoughts? It's entirely possible that I did use XFS when I first formatted those three drives (I have no notes of what I did, oops).
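Maybe I should also test whether each of the underlying partitions can be read at all at its start; something like this, if I have the device names right (they may need adjusting):
Code:
# try to read the first megabyte of each physical volume
sudo dd if=/dev/sda1 of=/dev/null bs=1M count=1
sudo dd if=/dev/hdc1 of=/dev/null bs=1M count=1
sudo dd if=/dev/hdd1 of=/dev/null bs=1M count=1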


 
08-07-2009, 01:10 AM   #14
allanf (Member)

You have partition 1 of sda, hdc, and hdd marked as LVM partitions.

You have placed all of these into the volume group "sata_volume_group". You can think of this as a pretend drive called "sata_volume_group".

You have created a logical volume called "sata_nas". You can think of this as a partition on the pretend drive called "sata_volume_group".

When you ran "pvscan", the "/dev/dm-0: read failed after 0 of 4096 at 0: input/output error" suggests that some RAID controller (or an LVM mirror) is involved.

It looks like Knoppix already activated the LVM logical volume, given the "ACTIVE '/dev/sata_volume_group/sata_nas'" message from "lvscan". That would mean the "lvchange" is not needed to activate it.



Can you mount your normal "/" while in Knoppix?
Code:
mount -t ??? /dev/??? /mnt
Then look at the file "/mnt/etc/fstab" to see the type. The basic information is line-based, and each line contains:
Code:
device_reference  mount_point_path  file_system_to_use  other_options_and_stuff
Then you can see how you were mounting it.
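For reference, an fstab line for the logical volume would look something like this (the mount point here is only an example, not necessarily yours):
Code:
/dev/mapper/sata_volume_group-sata_nas   /srv/nas   ext3   defaults   0   2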
 
08-07-2009, 08:30 PM   #15
ericthefish (LQ Newbie, original poster)

Quote:
Originally Posted by allanf
Can you mount your normal "/" while in Knoppix? Then look at the file "/mnt/etc/fstab" to see the type. Then you can see how you were mounting it.
Haha! Wow, I never even thought of doing something as simple as that. It turns out it was all ext3 for the /dev/satawhatever volume.

This should make it easier to troubleshoot things, no? I mean, ext3 is a fairly well-documented filesystem.

Still, the
Code:
sudo fsck -t ext3 /dev/sata...
command fails miserably, spitting out "fsck: fsck.lvm2pv: not found" and "fsck: Error 2 while executing fsck.lvm2pv for /dev/hdc1"

(same for hdd1 and sda1, which are the 3 hard drives I was using for LVM)

I did some googling, and apparently that's common, since it's not a good command to use (fsck directly on an LVM physical volume).

So I don't know what to do next. Can I safely follow a guide to "create" an LVM without risking the data that's already on there?

Is there any way at all to see the files on each individual drive if I "deactivate" the LVM somehow?
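In the meantime, maybe I can at least try a read-only mount of the logical volume itself to see what is still readable; something like this, assuming lvscan still shows it as ACTIVE (/mnt/nas is just an example mount point):
Code:
sudo mkdir -p /mnt/nas
sudo mount -o ro -t ext3 /dev/sata_volume_group/sata_nas /mnt/nas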
 
  

