RAID 1 wrongly detected as RAID 0 when one drive is missing
I'm learning RAID, so maybe this is a basic question, but it's not covered anywhere...
When I create a RAID 1 array, update /etc/mdadm/mdadm.conf as in [1], and run update-initramfs -u, I can reboot and mount it. Everything is fine. Now I remove one drive and reboot, to simulate a critical failure. The array is wrongly detected as raid0 (WHY?) and inactive (WHY? because we "just have half of a raid0"?), and as such cannot be used. What I expected to see was an active, degraded array, not this fatal state. What's wrong? See [2] for a description of the error state.

Related question: why does mdadm.conf [1] contain devices=/dev/sdb1,/dev/sdc1 if allegedly all partitions (resp. the ones defined in the DEVICE variable) are scanned for the RAID UUID? So why is this part generated? What is its use, and why isn't a partition UUID used there instead? Could one even be used here?

[1] mdadm.conf Code:
cat /etc/mdadm/mdadm.conf [2] erroneous state: Code:
root@mmucha-VirtualBox1:~# cat /proc/mdstat
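For anyone hitting the same symptom: when a member disk is missing at boot, the initramfs can leave the array half-assembled, and /proc/mdstat then lists it as inactive (the personality shown can be misleading, since no RAID personality has been attached yet). A sketch of how one might bring such an array up degraded by hand; the device names /dev/md0 and /dev/sdb1 are assumptions taken from the question, adjust to your setup (all commands require root):

```shell
# Stop the half-assembled, inactive array first.
# (/dev/md0 and /dev/sdb1 are hypothetical names -- check /proc/mdstat
# and `mdadm --examine` output for your actual devices.)
mdadm --stop /dev/md0

# Re-assemble from the surviving member; --run tells mdadm to start the
# array even though fewer devices are present than last time (degraded).
mdadm --assemble --run /dev/md0 /dev/sdb1

# A degraded two-disk RAID 1 should now show as active with "[U_]"
# (one disk up, one missing) in /proc/mdstat.
cat /proc/mdstat
```

This only starts the array degraded; to repair it you would still add a replacement disk with `mdadm --add`.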
I can't help you regarding that matter but I suggest you use another username because:
I don't understand. I created this account some time ago and I have this account & username now. Definitely nothing blocked me from doing so, so I assumed it was OK. Can someone who knows the answer reply to me, or is it against house rules?
Quote:
I would suggest you spend some time reading Linux_Raid.
That wiki has an article that explains that (in-kernel) auto-detect was removed quite a while ago. If you are using x'FD' on your partitions you may be confusing the initrd. I'd boot a live CD and check the degraded array - see if it is detected properly.
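For context on the x'FD' remark: that is the MBR partition type 0xfd ("Linux raid autodetect"), which the old in-kernel auto-detection keyed on; modern mdadm finds members by the superblock UUID instead. A short sketch of how one might check both, reusing the device names from the question (/dev/sdb, /dev/sdb1 -- substitute your own; requires root):

```shell
# Show the partition table; the "Id"/"Type" column reveals whether the
# partition is marked 0xfd (Linux raid autodetect) or a plain type.
fdisk -l /dev/sdb

# Ask mdadm what it knows about the member itself: this prints the
# array UUID, RAID level, and this device's role from the superblock --
# independent of the partition type byte.
mdadm --examine /dev/sdb1
```

If `mdadm --examine` reports a sane RAID 1 superblock, the data is intact and the problem is only with assembly at boot.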
I'm a (RAID) newbie, so I can only follow tutorials/documentation, which is mostly out of date. Do you have an up-to-date tutorial/how-to?
Also, I don't follow that advice about autodetect: mdadm.conf explicitly says level=raid1, so where does autodetection come into it? And please, what is x'FD'? No idea.