LinuxQuestions.org

-   Linux - Server (http://www.linuxquestions.org/questions/linux-server-73/)
-   -   How to force assemble ICH9R with mdadm (http://www.linuxquestions.org/questions/linux-server-73/how-to-force-assemble-ich9r-with-mdadm-926260/)

troy.mcclure 01-28-2012 11:50 PM

How to force assemble ICH9R with mdadm
 
Hello,

I have an Intel on-board BIOS fake RAID0 (ICH9R) using 4 of the 10 hard drives in my machine.
I woke up one morning to find the computer stuck. After rebooting, one HD showed errors.
What I'd like to do is force mdadm to assemble the array and save as much data as possible.
The 4 disks I'd like to assemble are sda, sdb, sdc and sdd.
As you'll see below, mdadm can tell that there is a RAID volume called OS, but I can't find it anywhere in /dev.
Here is some info:

root@euler:~# mdadm --assemble --force --scan
<returned nothing>
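(A likely reason the scan finds nothing: IMSM fake RAID uses a two-level layout, a container plus the volume(s) inside it, and --scan only assembles arrays described in mdadm.conf or picked up by udev. A hedged sketch of the mdadm.conf lines that `mdadm --examine --scan` would typically emit for this setup, with both UUIDs taken from the -E output below; the /dev/md/OS name is an assumption:)

```
# Hypothetical mdadm.conf entries: first the IMSM container, then the
# RAID0 volume "OS" held inside it (member=0 is the first volume).
ARRAY metadata=imsm UUID=bb3b64ec:165c130f:d32dfd0a:e7e8d635
ARRAY /dev/md/OS container=bb3b64ec:165c130f:d32dfd0a:e7e8d635 member=0 UUID=538ded6b:3f2e6505:ab3e20e2:f5a68586
```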

root@euler:~# mdadm --misc -E /dev/sda
/dev/sda:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.2.01
Orig Family : a805c823
Family : a805c823
Generation : 000000ba
UUID : bb3b64ec:165c130f:d32dfd0a:e7e8d635
Checksum : bc7132a7 correct
MPB Sectors : 2
Disks : 4
RAID Devices : 1

Disk01 Serial : WD-WCATR6768155
State : active
Id : 00000000
Usable Size : 1953520654 (931.51 GiB 1000.20 GB)

[OS]:
UUID : 538ded6b:3f2e6505:ab3e20e2:f5a68586
RAID Level : 0
Members : 4
Slots : [_UUU]
This Slot : 1
Array Size : 2097152000 (1000.00 GiB 1073.74 GB)
Per Dev Size : 524288264 (250.00 GiB 268.44 GB)
Sector Offset : 0
Num Stripes : 2048000
Chunk Size : 128 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : clean

Disk00 Serial : WD-WCATR6768176
State : active failed
Id : 00010000
Usable Size : 1953520654 (931.51 GiB 1000.20 GB)

Disk02 Serial : WD-WCATR6739549
State : active
Id : 00030000
Usable Size : 1953520654 (931.51 GiB 1000.20 GB)

Disk03 Serial : WD-WCATR6737599
State : active
Id : 00020000
Usable Size : 1953520654 (931.51 GiB 1000.20 GB)
root@euler:~# mdadm --misc -E /dev/sdb
/dev/sdb:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.2.01
Orig Family : a805c823
Family : a805c823
Generation : 000000ba
UUID : bb3b64ec:165c130f:d32dfd0a:e7e8d635
Checksum : bc7132a7 correct
MPB Sectors : 2
Disks : 4
RAID Devices : 1

[OS]:
UUID : 538ded6b:3f2e6505:ab3e20e2:f5a68586
RAID Level : 0
Members : 4
Slots : [_UUU]
This Slot : ?
Array Size : 2097152000 (1000.00 GiB 1073.74 GB)
Per Dev Size : 524288264 (250.00 GiB 268.44 GB)
Sector Offset : 0
Num Stripes : 2048000
Chunk Size : 128 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : clean

Disk00 Serial : WD-WCATR6768176
State : active failed
Id : 00010000
Usable Size : 1953520654 (931.51 GiB 1000.20 GB)

Disk01 Serial : WD-WCATR6768155
State : active
Id : 00000000
Usable Size : 1953520654 (931.51 GiB 1000.20 GB)

Disk02 Serial : WD-WCATR6739549
State : active
Id : 00030000
Usable Size : 1953520654 (931.51 GiB 1000.20 GB)

Disk03 Serial : WD-WCATR6737599
State : active
Id : 00020000
Usable Size : 1953520654 (931.51 GiB 1000.20 GB)
root@euler:~# mdadm --assemble --uuid=bb3b64ec:165c130f:d32dfd0a:e7e8d635
mdadm: an md device must be given in this mode

root@euler:~# blkid
/dev/sda: TYPE="isw_raid_member"
/dev/sde1: UUID="8d354cbd-101c-e309-bd80-ffbe617b6611" UUID_SUB="02d88cb5-f7ff-010a-7973-681f59eb4bf4" LABEL=":OS5D" TYPE="linux_raid_member"
/dev/sde2: UUID="1211e044-a4f4-264c-7a03-150efa02e9e8" UUID_SUB="a0f267bd-8d75-1f3a-4ac4-a25b66b71f45" LABEL=":SWAP" TYPE="linux_raid_member"
/dev/sdf1: UUID="8d354cbd-101c-e309-bd80-ffbe617b6611" UUID_SUB="793accb5-75e2-8b48-dba4-e5032d3c36d2" LABEL=":OS5D" TYPE="linux_raid_member"
/dev/sdf2: UUID="1211e044-a4f4-264c-7a03-150efa02e9e8" UUID_SUB="9cd61b6e-63c0-198b-8806-8fc2c05c7fde" LABEL=":SWAP" TYPE="linux_raid_member"
/dev/md127: LABEL="OSR6D6x5" UUID="46dc0b12-74a0-4d1d-8651-0fd87c168c71" TYPE="ext4"
/dev/md126: UUID="460d44b2-776c-4cf7-bbba-bb8441163366" TYPE="swap"
/dev/sdg1: UUID="8d354cbd-101c-e309-bd80-ffbe617b6611" UUID_SUB="d9040cac-5669-395b-b632-f159698efeb6" LABEL=":OS5D" TYPE="linux_raid_member"
/dev/sdg2: UUID="1211e044-a4f4-264c-7a03-150efa02e9e8" UUID_SUB="8d097e3d-94cc-7c4e-d7c6-340c3e85db78" LABEL=":SWAP" TYPE="linux_raid_member"
/dev/sdh1: UUID="8d354cbd-101c-e309-bd80-ffbe617b6611" UUID_SUB="63cb96af-1c04-5e8f-e719-f163296874a3" LABEL=":OS5D" TYPE="linux_raid_member"
/dev/sdh2: UUID="1211e044-a4f4-264c-7a03-150efa02e9e8" UUID_SUB="aa555af2-294d-9ba3-9204-f7450d8fd4e5" LABEL=":SWAP" TYPE="linux_raid_member"
/dev/sdi1: UUID="8d354cbd-101c-e309-bd80-ffbe617b6611" UUID_SUB="4d3b7a9c-13c5-0eb1-babc-9d45fd46dad9" LABEL=":OS5D" TYPE="linux_raid_member"
/dev/sdi2: UUID="1211e044-a4f4-264c-7a03-150efa02e9e8" UUID_SUB="25e33250-0fce-5609-5ad2-f59388f1b0cb" LABEL=":SWAP" TYPE="linux_raid_member"
/dev/sdj1: UUID="8d354cbd-101c-e309-bd80-ffbe617b6611" UUID_SUB="b16ec20b-8d61-1bbf-8b3b-c0228b00dc4c" LABEL=":OS5D" TYPE="linux_raid_member"
/dev/sdj2: UUID="1211e044-a4f4-264c-7a03-150efa02e9e8" UUID_SUB="4c1fa322-7f78-53af-f5dd-945031605e7a" LABEL=":SWAP" TYPE="linux_raid_member"
/dev/sdk1: UUID="9290-402E" TYPE="vfat"

Please note that the volumes OS5D and SWAP (md126 & md127) are not related to the problem and work properly; they are native Linux RAID, not ICH9R.
OS is the problematic ICH9R volume.
All I'd like is to force-mount it and save everything possible.

I'd appreciate your help; it's extremely important to me.
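(For reference, IMSM arrays are normally brought up in two steps: assemble the container from the member disks, then incrementally start the volume(s) inside it. Below is a dry-run sketch that only prints the commands rather than running them; the container name /dev/md/imsm0 is an assumption, and -R asks mdadm to start the volume even if it is degraded. Drop the `echo`s to actually run them, ideally only after imaging the failing disk.)

```shell
# Dry-run sketch of two-step IMSM assembly (prints commands, runs nothing).
# Member disks and container name are assumptions based on this thread.
disks="/dev/sda /dev/sdb /dev/sdc /dev/sdd"
container=/dev/md/imsm0

# Step 1: assemble the IMSM container from the four member disks.
echo "mdadm --assemble $container -e imsm $disks"
# Step 2: start the volume(s) in the container; -R attempts a degraded start.
echo "mdadm -I -R $container"
```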

Reuti 01-29-2012 10:57 AM

Independent of whether it’s a fake RAID or software/hardware RAID: with RAID0 you have no redundancy. If “RAID0” is not a typo, I fear there is not much you can recover.
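(To illustrate the point: with RAID0, consecutive chunks rotate across all members, so with the 128 KiB chunk size reported by mdadm -E above, every fourth chunk of the volume sits on the failed disk. A minimal sketch of the standard RAID0 offset-to-member mapping, assuming the usual striping order:)

```shell
# RAID0 striping: which member disk holds a given byte offset?
# Chunk size 128 KiB and 4 members, as reported by mdadm -E above.
chunk=131072   # 128 KiB in bytes
members=4
for off in 0 131072 262144 393216 524288; do
  echo "byte offset $off -> member disk $(( (off / chunk) % members ))"
done
```

The offsets cycle 0, 1, 2, 3, 0, … across the members, which is why losing any single member leaves no contiguous recoverable data.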

