04-24-2011, 07:45 PM | #1
LQ Newbie | Registered: Apr 2011 | Posts: 2
mdadm: Rebuild a software raid from drives with existing partitions
Hi all,
I've got a question about a software RAID. It's a JBOD (linear) array from a Synology box with three disks, one of which is damaged. That disk apparently wasn't in use (compare the RAID size of 493 GB with the two remaining 250 GB disks).
The linear array was on the other two disks. Because of the damaged disk, the Synology device tells me the volume has crashed, but it looks as if that disk was never actually part of the volume.
Quote:
DiskStation> mdadm --detail /dev/md2
/dev/md2:
Version : 00.90
Creation Time : Tue Feb 16 19:50:30 2010
Raid Level : linear
Array Size : 482046208 (459.72 GiB 493.62 GB)
Raid Devices : 3
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Sun Apr 24 19:09:27 2011
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Rounding : 64K
UUID : 3d09f1a5:7854591e:980d0758:00ba2058
Events : 0.3
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 0 0 1 removed
2 8 35 2 active sync /dev/hdc3
DiskStation>
Quote:
DiskStation> fdisk -l
Disk /dev/sda: 249.9 GB, 249998918144 bytes
255 heads, 63 sectors/track, 30393 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 1 310 2490043+ fd Linux raid autodetect
/dev/sda2 311 375 522112+ fd Linux raid autodetect
/dev/sda3 392 30393 240991065 fd Linux raid autodetect
Disk /dev/sdc: 250.0 GB, 250058268160 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 310 2490043+ fd Linux raid autodetect
/dev/sdc2 311 375 522112+ fd Linux raid autodetect
/dev/sdc3 392 30401 241055325 fd Linux raid autodetect
DiskStation>
Quote:
DiskStation> cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : active linear sda3[0] sdc3[2]
482046208 blocks 64k rounding [3/2] [U_U]
md1 : active raid1 sda2[0] sdc2[2]
522048 blocks [4/2] [U_U_]
md0 : active raid1 sda1[0] sdc1[2]
2489920 blocks [4/2] [U_U_]
unused devices: <none>
DiskStation>
But /dev/md2 is not mountable:
Quote:
DiskStation> mount /volume1
mount: Mounting /dev/md2 on /volume1 failed: Invalid argument
DiskStation>
Quote:
DiskStation> e2fsck -y /dev/md2
e2fsck 1.41.3 (12-Oct-2008)
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/md2
The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
But the superblock looks correct, according to the first quote:
"Persistence : Superblock is persistent"? Or is the md superblock something different from the filesystem superblock that e2fsck complains about?
It would be nice if I could recover the data. My colleague kept his private data on this NAS device. We know that was a mistake, but knowing it doesn't help us at this point.
Thank you for any ideas!
04-25-2011, 03:54 AM | #2
Member | Registered: Mar 2006 | Location: Austria | Distribution: Mandriva/Debian | Posts: 104
Unfortunately it is not the 3rd (probably empty) disk that has gone missing; it looks like it was the 2nd one (/dev/sdb).
A linear array gives no advantage in reliability or speed; instead it multiplies the probability of failure.
You may run testdisk on a clone of /dev/sda alone to get back at least "some" of the data.
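Something like this, as a rough sketch (paths are examples; it assumes GNU ddrescue is installed and /mnt/backup has room for a full disk image):
Quote:
# image the healthy disk first; the map file lets ddrescue resume after interruptions
ddrescue -d /dev/sda /mnt/backup/sda.img /mnt/backup/sda.map
# then point testdisk at the image and let it search for partitions and files
testdisk /mnt/backup/sda.img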
04-25-2011, 06:59 AM | #3
LQ Newbie | Registered: Apr 2011 | Posts: 2 | Original Poster
Yes, that's true. The second disk is gone, and we removed it (it couldn't spin up: klack, klack, klack, then spin down).
As far as I know, JBOD, or the linear layout as it is called, places each file completely on one disk, so only the files on the damaged HDD are lost.
So I'm thinking about recreating the software RAID with
Quote:
mdadm --create /dev/md2 --level=linear --raid-devices=2 /dev/sda3 /dev/sdc3
I'm not sure whether this is a good idea, because I don't want to wipe my partitions for a new RAID configuration. If this command just sets up the new layout without touching the data on the partitions, I'd be happy and would try it.
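Before trying anything destructive, I'd first save what's still in the superblocks and partition tables, something like this (the file paths are just examples):
Quote:
# record the md superblocks of both surviving members
mdadm --examine /dev/sda3 > /root/sda3-examine.txt
mdadm --examine /dev/sdc3 > /root/sdc3-examine.txt
# and keep a copy of the current partition tables too
fdisk -l > /root/partition-tables.txt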
So, I appreciate any help!
04-26-2011, 02:32 PM | #4
Member | Registered: Mar 2006 | Location: Austria | Distribution: Mandriva/Debian | Posts: 104
You do not need to create a new array to recover the remainder of your data.
Just clone the remaining disk with data, mount /dev/sda3 as a single partition,
do NOT run fsck, mount it read-only, and recover what you can get. I use
software RAID as well, and that has worked fine for me whenever it was necessary.
A while ago I recovered most of the data from a dying NAS device with two disks,
also a linear "raid", by attaching the disks to a Linux box and running
dd_rescue to network storage.
The dd_rescue run took almost a week, day and night, since the read speed of the
affected disk was down to a few kBytes/sec in places. Although some sectors were
missing, I could then mount the image.
If you have a partitioned RAID set, you may want to use fdisk first to change the
partition type of the proper partition from fd (raid) to 83 (Linux).
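As a rough sketch of those steps (device names, the mount point, and the ext3 guess are assumptions; do this on a clone, never on the original disk):
Quote:
# stop the degraded array so the member partition can be opened directly
mdadm --stop /dev/md2
# with 0.90 metadata the md superblock sits at the END of the member,
# so the filesystem starts right at the beginning of /dev/sda3
mount -t ext3 -o ro /dev/sda3 /mnt/recovery
# if the kernel refuses because the filesystem is larger than the partition,
# testdisk or debugfs -c on the clone are the fallback
cp -a /mnt/recovery/. /mnt/backup/rescued/
# changing the partition type from fd to 83 with fdisk simply keeps the
# kernel's raid autodetect from grabbing the partition at the next boot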