Issues mounting RAID drive
This is my first time setting up a RAID array, and I am still learning my way around Linux, so please excuse me if I have done something horribly wrong...
I have a RAID 10 array split across 4 identical 1 TB drives. While I was out of town, I noticed that I could no longer access my data (this is set up as a file server). When I got home, I noticed a message from S.M.A.R.T. saying that drive 1 is failing. (I have already sent out for a replacement from Seagate.) Now the entire array is inactive and I cannot seem to get it to come back up. Whenever I try to mount it with "mount /dev/md0", I get the message "mount: wrong fs type, bad option, bad superblock on /dev/md0...". I am not sure if this is because of the failing disk (which is still attached) or due to another problem. Can somebody please give me some direction here? I will post the outputs from mdadm --examine for all 4 drives below.
Code:
badwolf@BadWolfNAS:~$ sudo mdadm --examine /dev/sda
Code:
badwolf@BadWolfNAS:~$ sudo cat /proc/mdstat
|
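For reference, a minimal sketch of the checks being gathered above, assuming the four members enumerate as /dev/sda through /dev/sdd on this box (the actual device names may differ):
Code:
# Show kernel-level RAID state: which md arrays exist and whether they are active
cat /proc/mdstat
# Dump the md superblock from each suspected member so the UUIDs and event counts can be compared
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    sudo mdadm --examine "$d"
done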
Actually, more information would be helpful. What flavor of Linux do you run? How exactly is your RAID set up? Is it a RAID controller that may be accessed via BIOS during POST?
|
Sorry, I run Ubuntu Server 14.04.1 LTS. It is a software RAID, which I configured through Webmin. I do not have access to the array during POST.
|
Puzzling
|
Sorry for the late response; it has been a long week. Clearly something has gone awry here. What can I do to rebuild my RAID without losing all of my data? And if I cannot rebuild, is there any way to recover the data onto another drive so I can start over? Much of this data does not have a backup, as this array was my solution when I ran out of space on my other machines. By the way, in case it is not clear, there should be 5 drives total on this system: a small, independent primary drive that holds the OS, and four 1 TB drives in a RAID 10 configuration.
Also, the last drive listed in the mdadm --examine output is the one that is failing and will be replaced. |
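Not from this thread, but a commonly suggested precaution before any rebuild attempt when there is no backup is to image each member to spare storage first, so recovery can be retried if an attempt goes wrong. A sketch, assuming a large enough destination is mounted at /mnt/spare (the path and device name are placeholders):
Code:
# Copy the raw contents of one member to an image file, keeping a map of
# unreadable sectors so the copy can be resumed or refined later.
sudo ddrescue -f -n /dev/sda /mnt/spare/sda.img /mnt/spare/sda.map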
What is the output of
Code:
mdadm --detail --scan
And for each device in the result, post the output of
Code:
mdadm --detail <device>
Code:
# mdadm --detail --scan
|
mdadm --detail --scan returns
Code:
mdadm: md device /dev/md0 does not appear to be active. |
Does "mdadm --detail /dev/md0" return anything?
|
No, it returns the same thing.
|
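At this point, the usual thing to look at (a sketch, not something quoted from the thread; the device names are an assumption) is whether the member superblocks still agree on the array UUID and how far apart their event counts are:
Code:
# Compare the key superblock fields across the suspected members
sudo mdadm --examine /dev/sd[abcd] | grep -E '/dev/sd|Array UUID|Events|Device Role|Array State'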
Update
So I have been trying to research this on my own as well. While doing so I have rebooted a couple of times, re-run a few things, and noticed that some of the outputs have changed, specifically the contents of mdstat. Here they are, just in case they help make sense of anything.
mdstat:
Code:
badwolf@BadWolfNAS:~$ sudo cat /proc/mdstat
The drive that needs to be replaced (according to S.M.A.R.T.) is sda (disk 0). This may be of use as well...
Code:
badwolf@BadWolfNAS:~$ sudo mdadm --assemble --scan --verbose
|
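For comparison, a sketch of the kind of explicit assemble that is usually tried next. The device names here are assumptions based on the thread (sda failing, sdb-sdd healthy), and --force can make things worse if the wrong members or badly out-of-date superblocks are used, so treat it as a last resort:
Code:
# Stop any half-assembled, inactive array first
sudo mdadm --stop /dev/md0
# Try assembling the array from the three members S.M.A.R.T. is not complaining about
sudo mdadm --assemble --verbose /dev/md0 /dev/sdb /dev/sdc /dev/sdd
# If that refuses to start the array and the event counts are close, a forced
# assemble is sometimes attempted, accepting slightly stale members:
# sudo mdadm --assemble --force --verbose /dev/md0 /dev/sdb /dev/sdc /dev/sdd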