Raid1 degraded after reboot

Old 11-28-2011, 02:31 PM   #1
crazy4nix, LQ Newbie (Registered: Sep 2011, Posts: 6)

After setup, cat /proc/mdstat output looks like this:

Code:
proxmox:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdc2[1] sdb2[0]
      293024832 blocks [2/2] [UU]

unused devices: <none>
Also, right after I set up the RAID1 fresh, I got the following:

Code:
proxmox:~# mdadm --examine --scan
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=fbda4051:61cbc27f:7f2b1f39:e153e83f
But, after reboot, cat /proc/mdstat outputs:

Code:
proxmox:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active (auto-read-only) raid1 sdc[1]
      293024832 blocks [2/1] [_U]

unused devices: <none>
Why is it using the whole disk sdc now, instead of sdc2?

Also, now I get:

Code:
proxmox:~# mdadm --examine --scan
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=fbda4051:61cbc27f:7f2b1f39:e153e83f
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=fbda4051:61cbc27f:9822ee23:9b948649

Code:
proxmox:~# dmesg | grep md0
md/raid1:md0: active with 1 out of 2 mirrors
md0: detected capacity change from 0 to 300057427968
 md0: p1 p2
md0: p2 size 586049840 exceeds device capacity, limited to end of disk
Where did the two partitions on /dev/md0 come from? I never made them. Also, sdc1 and sdc2 aren't listed in the /dev tree.
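
If it helps, I assume I can at least confirm what the kernel itself sees (as opposed to what udev has created under /dev) with something like this; I have not run these yet:

Code:
proxmox:~# cat /proc/partitions
proxmox:~# ls -l /dev/sd*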

Here is the fdisk output:

Code:
proxmox:~# fdisk -l /dev/sdb

Disk /dev/sdb: 300.0 GB, 300069052416 bytes
255 heads, 63 sectors/track, 36481 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x3bd84a48

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1           2       10240   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sdb2               2       36482   293024920   fd  Linux raid autodetect
Partition 2 does not end on cylinder boundary.

proxmox:~# fdisk -l /dev/sdc

Disk /dev/sdc: 300.0 GB, 300069052416 bytes
255 heads, 63 sectors/track, 36481 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x371c8012

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1           2       10240   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sdc2               2       36482   293024920   fd  Linux raid autodetect
Partition 2 does not end on cylinder boundary.
A bit of info: the server is running Proxmox v1.9, which is based on Debian Lenny 64-bit. sda is the system hard drive (hardware RAID); sdb and sdc are brand-new 300 GB Raptor drives.
 
Old 12-01-2011, 12:06 PM   #2
hvulin, LQ Newbie (Registered: Sep 2009, Location: Velika Gorica, Croatia, Distribution: Debian, Posts: 15)

Something (probably you) created partitions for software RAID and then configured them as RAID1 (a mirror: everything is copied to both disks, which sit behind the /dev/md0 device on which you make a filesystem, LVM, or whatever).
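
Also, the two different UUIDs in your mdadm --examine --scan output make me think there is a leftover superblock on the whole disk /dev/sdc, maybe from an earlier attempt that used whole disks instead of partitions. That would explain why the array came back assembled from sdc instead of sdc2, and it would also explain your mysterious p1/p2 on md0: if md0 maps the whole disk, the kernel sees sdc's own partition table inside it. I am only guessing without seeing your disks, but you can check where the superblocks actually live with:

# mdadm --examine /dev/sdc     (is there metadata on the whole disk?)
# mdadm --examine /dev/sdc2    (is there metadata on the partition?)

If both print a superblock but with different UUIDs, then mdadm --zero-superblock /dev/sdc (run with the array stopped) should clear the stale one.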

But after the reboot your sdb2 is missing, so the mirror is now degraded (still working, but with only one copy). By the way, your sdb1 and sdc1 are probably mistakes, since they are tiny (about 10 MB each). If you still have those two disks and want to use them, you should probably use mdadm to remove the array, then fdisk to recreate the partitions, and then mdadm to recreate the array (but with partitions spanning the whole disk).

something like:
# mdadm -S /dev/md0 (stop the array)
# mdadm --zero-superblock /dev/sdb2 (wipe the old RAID metadata)
# mdadm --zero-superblock /dev/sdc2
# fdisk /dev/sdb
p (print the partition table)
d, 1 (delete sdb1)
d, 2 (delete sdb2)
n (new partition)
p (primary)
1 (partition number 1)
enter, enter (accept the defaults: from start of disk to end)
t
fd (set the type to Linux raid autodetect)
w (write and exit)
# fdisk /dev/sdc
(repeat the steps above)
# mdadm -C /dev/md0 -l 1 -n 2 /dev/sdb1 /dev/sdc1
(now you have a new md0 consisting of sdb1 and sdc1)
# mkfs.ext4 /dev/md0
# mount /dev/md0 /mnt (to test)

(I probably have errors above but feel free to read the man pages: man mdadm, man fdisk, man mkfs.ext4)
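
One more thing: once the new array is up and resyncing, record it in the mdadm config so it assembles the same way at every boot. On Debian that is usually something like the following (check the exact paths on your lenny install, and remove any old ARRAY lines from the file first):

# mdadm --detail --scan >> /etc/mdadm/mdadm.conf (append the running array's definition)
# update-initramfs -u (rebuild the initramfs so boot-time assembly matches)

Otherwise the boot scripts may keep picking up whichever superblock they find first, and you can end up degraded again after the next reboot.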
 
  

