Linux - Server. This forum is for the discussion of Linux software used in a server-related context.
I've read these forums for a long time and I've found them really useful.
Now I've got a problem and I would love your help. Thanks in advance!
I have a PC at home working as a server, running Ubuntu Server 7.10.
It has 5 hard disks:
1 PATA/IDE disk (system + swap)
4 SATA disks (software raid: array of 4 disks RAID 5)
When the server boots, you can read:
Code:
fsck.ext3: Unable to resolve 'UUID=2ec026e5-1ed9-4007-9d4f-82fbfafb2d9f'
fsck died with exit status 8
* File system check failed
Please repair the file system manually
This is what "fdisk -l" shows:
Code:
Disk /dev/hda: 120.0 GB, 120000000000 bytes
255 heads, 63 sectors/track, 14589 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xdc08dc08
Device Boot Start End Blocks Id System
/dev/hda1 * 1 13991 112382676 83 Linux
/dev/hda2 13992 14589 4803435 5 Extended
/dev/hda5 13992 14589 4803403+ 82 Linux swap / Solaris
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00003b91
Device Boot Start End Blocks Id System
/dev/sda1 1 60801 488384001 fd Linux raid autodetect
Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0005978d
Device Boot Start End Blocks Id System
/dev/sdb1 1 60801 488384001 fd Linux raid autodetect
Disk /dev/sdc: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000077ee
Device Boot Start End Blocks Id System
/dev/sdc1 1 60801 488384001 fd Linux raid autodetect
Disk /dev/sdd: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00098808
Device Boot Start End Blocks Id System
/dev/sdd1 1 60801 488384001 fd Linux raid autodetect
I've tried to change this line in fstab:
UUID=2ec026e5-1ed9-4007-9d4f-82fbfafb2d9f /media/raid ext3 defaults 0 2
to:
/dev/md0 /media/raid ext3 defaults 0 2
Then "mount -a" or rebooting:
Code:
mount: special device /dev/disk/by-uuid/2ec026e5-1ed9-4007-9d4f-82fbfafb2d9f does not exist
mount: wrong fs type, bad option, bad superblock on /dev/md0,
missing codepage or helper program, or other error
(could this be the IDE device where you in fact use
ide-scsi so that sr0 or sda or so is needed?)
In some cases useful info is found in syslog - try
dmesg | tail or so
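Before changing anything, a first diagnostic pass could look like this (assuming the array members are sda1 through sdd1, as your fdisk output shows; mdadm --examine reads the RAID superblock on each member):
Code:
# cat /proc/mdstat
# mdadm --examine /dev/sda1
# dmesg | tail
If /proc/mdstat shows no md0, or --examine reports no superblock, the array isn't assembling, which would explain both the missing by-uuid symlink and the bad-superblock mount error.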
Could it be a RAID failure?
Do I need to create the array again?
Any kind of help will be really appreciated.
Thank you very much.
At first glance it doesn't look like it's a RAID problem. Have you done any system changes, upgrades, ...?
Have a look in: /dev/disk/by-uuid and see if you can find 2ec026e5-1ed9-4007-9d4f-82fbfafb2d9f and look to see if /dev/md0 or /dev/md/0 still exist.
Also try the following two commands, just in case:
Code:
# cat /proc/mdstat
# mdadm --detail /dev/md0
I suspect the system got rid of the UUID or that it changed. Maybe all you need is to add the new UUID to your RAID config file, but get more info before making any changes. You can really make things worse if you are not very careful.
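One hedged way to find the filesystem's current UUID, assuming /dev/md0 will assemble (blkid is one option; "tune2fs -l /dev/md0" would also show the UUID for ext3):
Code:
# mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
# blkid /dev/md0
Then put the UUID that blkid prints into the fstab line in place of the old one.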
I also noticed your '/' was commented out in your fstab!
Quote:
Originally Posted by eco
At first glance it doesn't look like it's a RAID problem. Have you done any system changes, upgrades, ...?
No changes.
Quote:
Originally Posted by eco
Hi,
Have a look in: /dev/disk/by-uuid and see if you can find 2ec026e5-1ed9-4007-9d4f-82fbfafb2d9f and look to see if /dev/md0 or /dev/md/0 still exist.
Can't find 2ec026e5-1ed9-4007-9d4f-82fbfafb2d9f. /dev/md0 does exist.
This gives an error:
ln -s 2ec026e5-1ed9-4007-9d4f-82fbfafb2d9f /dev/md0
The arguments should be the other way around, shouldn't they?
ln -s /dev/md0 2ec026e5-1ed9-4007-9d4f-82fbfafb2d9f
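For reference, ln -s takes the target first and the link name second, so recreating the missing symlink by hand (as a temporary test only; udev normally manages /dev/disk/by-uuid) would look like:
Code:
# ln -s /dev/md0 /dev/disk/by-uuid/2ec026e5-1ed9-4007-9d4f-82fbfafb2d9f
That said, mounting /dev/md0 directly also fails with a bad superblock, so the missing symlink looks like a symptom rather than the cause.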
mount -a:
Code:
mount: wrong fs type, bad option, bad superblock on /dev/md0,
missing codepage or helper program, or other error
(could this be the IDE device where you in fact use
ide-scsi so that sr0 or sda or so is needed?)
In some cases useful info is found in syslog - try
dmesg | tail or so
If you have the space, I'd back up all RAID disks using dd and then try to force a rebuild of the RAID. But seriously, back everything up before you make any changes.
Did you not have the data backed up before the 'failure'?
dd is slow but it's the best for making sure you have an exact copy of your disks in case of failure. Be sure to know which disk to restore to in the RAID.
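A minimal dd sketch, assuming you have somewhere with enough free space for the image files (the paths here are examples; double-check device names before running anything):
Code:
# dd if=/dev/sda of=/backup/sda.img bs=1M
# dd if=/dev/sdb of=/backup/sdb.img bs=1M
(repeat for sdc and sdd)
If a disk has read errors, adding conv=noerror,sync keeps dd going and pads unreadable blocks with zeros instead of aborting.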
I still think the problem is with the system and not the RAID software. Are you sure no changes were made? Was the box ever rebooted since the RAID was built? Does dmesg say anything more?
A long process might be to recreate an exact RAID and then dump the data back onto each disk and hope for the best.
I hope that the 4 disks in the array are ok, and I can restore the info.
Should I make a backup of the system disk too?
If I do, I think it should be unmounted, right?
I could use a liveCD.
By the way, I haven't unmounted anything to make backups of the disks of the array. Is that OK?
Well, the disks of the array were never started as RAID disks, so I can't see this as being a problem. For the system disk you are right: boot off a live CD and make the image. What I tend to do for testing is create a VM in, say, VirtualBox with a disk the same size as the OS disk, dd back to the VM and boot it to see if it all works, then make the changes there.
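That VM round trip could be sketched like this with VirtualBox (VBoxManage convertfromraw turns a raw dd image into a VDI the VM can boot from; exact syntax may vary by VirtualBox version):
Code:
# dd if=/dev/hda of=/backup/system.img bs=1M
# VBoxManage convertfromraw /backup/system.img /backup/system.vdi --format VDI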
Also make a copy of all the info regarding your RAID as you will want it to be identical and make sure you don't write to the RAID disks when you recreate it.
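To capture that information, something like the following (assuming mdadm can still read the array or its members; the output file path is just an example):
Code:
# mdadm --detail --scan > /root/raid-info.txt
# mdadm --examine /dev/sda1 >> /root/raid-info.txt
(repeat --examine for sdb1, sdc1 and sdd1)
The --examine output records each member's level, chunk size and device order, which is exactly what you need to recreate the array identically.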
Finally, after making backups of every disk, I installed Debian instead of Ubuntu.
During the installation, I built the RAID array again because it looked damaged (it showed a RAID 5 of only 2 disks; it should be 4 disks).
After installing Debian everything is fine. rsnapshot is already making backups every 4 hours, day, week, etc. to avoid future problems.