LinuxQuestions.org > Forums > Linux Forums > Linux - Hardware
Linux - Hardware: This forum is for Hardware issues. Having trouble installing a piece of hardware? Want to know if that peripheral is compatible with Linux?
Old 12-22-2020, 01:31 PM   #1
CJBIII
Member
 
Registered: Dec 2020
Location: Oak Lawn, IL
Distribution: openmediavault5, debian
Posts: 41

Rep: Reputation: Disabled
Raid problem, 1 of 4 disks dead, replaced with same, now what? Debian


I set this up 5 years ago [2015] with guru help. I am a newbie.

Server crashed after years of not updating/upgrading (didn't know how to get in; I do now) and power outages without a battery-backup system. Couldn't start it; the operating system wouldn't boot.

Reinstalled the OS after wiping the [Crucial SSD] drive. Got in and then figured out that one of the 4 storage drives was bad. I don't know the original configuration of the RAID [RAID 10, perhaps] and I am not familiar enough with the command line to figure this out myself, though I want to learn. I don't know if there is information on the drives that I need, so I hope not to wipe the drives and start over.

-One drive 'type' is linux_raid_member
-Two drives 'type' are promise_fasttrack_raid_member
-Last drive is new and unlabeled/untyped [just installed to replace the bad hd]

Hoping to add/mount the new drive, reassemble the raid, recover data, set up portainer and docker, and get better at maintaining and setting up the system.

Any help, greatly appreciated, thank you in advance.
Code:
Rig=NAS drive:
LEPA case, 
w/500 watt EVGA power supply
4x - 3 TB Seagate desktop hard drives
Crucial 2.5" MX200* SSD 1TB [OS]
Asus M5A99FX pro R2.0 motherboard
Openmediavault, Debian [new install, updated]
 
Old 12-23-2020, 07:20 AM   #2
berndbausch
LQ Addict
 
Registered: Nov 2013
Location: Tokyo
Distribution: Mostly Ubuntu and Centos
Posts: 6,316

Rep: Reputation: 2002
If this is a software RAID, run the command mdadm --scan --detail to get details of the array, including the RAID level and its health. You can also get some information from cat /proc/mdstat.

The lsblk command can also give hints.

The loss of a single drive of four should not make the array unusable, unless it's RAID 0.

Where do you get those drive types from? If they are partition types, they don't necessarily mean anything. You can use any partition type to assemble a software RAID.
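A typical diagnostic session might look like this (device names such as md127 are examples; output will differ on your system):

```shell
# Show all arrays mdadm can find, with RAID level and health
sudo mdadm --detail --scan
sudo mdadm --detail /dev/md127    # example array device name

# The kernel's view of software RAID arrays
cat /proc/mdstat

# Block devices, sizes, filesystem types, and mountpoints
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

# On-disk signatures (this is where types like
# linux_raid_member come from)
sudo blkid
```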

Last edited by berndbausch; 12-23-2020 at 07:24 AM.
 
Old 12-23-2020, 12:55 PM   #3
CJBIII
Member
 
Registered: Dec 2020
Location: Oak Lawn, IL
Distribution: openmediavault5, debian
Posts: 41

Original Poster
Rep: Reputation: Disabled
berndbausch - Thanks for the quick response.

I ran 'mdadm --detail --scan' and got 'INACTIVE-ARRAY /dev/md127 metadata=1.2 name...'

'lsblk' returned all four disks with correct size and no mountpoint

'cat /proc/mdstat' returned 'md127 : inactive sdd[3](S)'

This
'-One drive 'type' is linux_raid_member -Two drives 'type' are promise_fasttrack_raid_member'
came from 'blkid' which doesn't show the new hard drive I dropped in. I thought it might give a clue as to whether this is a hardware, software or fakeRAID.

It might sound like I have a clue, but I am just trying to assemble the bits of info I am gleaning from a thousand different sources. You won't insult me if you dumb it down, lol. Thanks again
 
Old 12-23-2020, 01:24 PM   #4
CJBIII
Member
 
Registered: Dec 2020
Location: Oak Lawn, IL
Distribution: openmediavault5, debian
Posts: 41

Original Poster
Rep: Reputation: Disabled
helpful?

[Attached images: IMG_20201209_095146_resized_20201209_101054489.jpg and attachments 34989, 34990 (photos of disk/RAID information screens)]
Last edited by CJBIII; 01-13-2021 at 01:56 PM.
 
Old 12-23-2020, 10:01 PM   #5
computersavvy
Senior Member
 
Registered: Aug 2016
Posts: 3,345

Rep: Reputation: 1486
Those images show that md127 is raid0 (striped), so it had no redundancy, and sdd was a member. I am afraid the data that was there is gone unless you have a recent backup. If it were raid1 (mirrored) the data would still be available with 1 disk. It appears sdb (your new drive) and sdd were the 2 members of the raid0 array.

sda and sdc are the promise_fasttrack_raid_member(s), but I am not familiar with that. It appears they are attached to a raid controller instead of using software raid and will need to be managed through the controller bios (accessible at boot time). Is that array visible and mounted when you do "ls /dev" or with "mount"?

It is possible that you had 2 arrays as raid0 using sdb & sdd for one and sda & sdc for the other. Then those arrays could have been assembled into another array as raid1. Looking at the config of the fasttrak array may help answer that question.

It is also possible that the raid function on the fasttrak card was disabled, so knowing what that config was is critical for working with the 3 original disks you have. That said, the fact that the mdadm command showed only one array and one member drive leads me to believe the others are a hardware raid.

A search for the promise fasttrak member message turns up a few bits of information about that controller and working with it. Including the controller model in the search will help.

Last edited by computersavvy; 12-23-2020 at 10:08 PM.
 
Old 12-24-2020, 12:11 PM   #6
CJBIII
Member
 
Registered: Dec 2020
Location: Oak Lawn, IL
Distribution: openmediavault5, debian
Posts: 41

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by computersavvy
It is possible that you had 2 arrays as raid0 using sdb & sdd for one and sda & sdc for the other.

This seems likely as I can't imagine us setting up an array without redundancy.

Quote:
Originally Posted by computersavvy
It is also possible that the raid function on the fasttrak card was disabled, so knowing what that config was is critical for working with the 3 original disks you have. Although the fact that the mdadm command only showed one array and one member drive leads me to believe the others are a hardware raid.

If anyone can give advice on how to find out whether the fasttrak was/is disabled, and whether it is possible to reassemble the raid and have it recognized, I would appreciate it. In the UEFI I see that AHCI is the setting for 'SATA Port1-4'.

Last edited by CJBIII; 12-24-2020 at 12:13 PM. Reason: additional information
 
Old 12-24-2020, 01:19 PM   #7
computersavvy
Senior Member
 
Registered: Aug 2016
Posts: 3,345

Rep: Reputation: 1486
Do the search I suggested and you will likely find the answer. It may also be found by searching for the manuals for the NAS server.

I also told you that you could look at the raid config on the fasttrak controller from bios at boot time.

We can give you hints and point you in the right direction, but ultimately it is up to you to follow through.

The info I have seen is that many of those promise controllers could only handle 2 disks in a raid array (either raid1 or raid0). You will have to do the research.

Last edited by computersavvy; 12-24-2020 at 01:22 PM.
 
Old 12-24-2020, 01:46 PM   #8
CJBIII
Member
 
Registered: Dec 2020
Location: Oak Lawn, IL
Distribution: openmediavault5, debian
Posts: 41

Original Poster
Rep: Reputation: Disabled
Thank you computersavvy for the points in the right direction.

I did the searches before I posted the first time and was unable to find a direction. The information seemed to be fifteen years old or more, and I was unable to make sense of it even after reading it several times.

By bios I take it you mean the UEFI. I have looked there and posted my findings, hoping to get direction on choosing AHCI in the UEFI over the other possibilities (RAID and IDE). Nowhere does it say fasttrak, nor was I able to find what this is or how to manipulate the settings in the UEFI to gain more information about the original setup.

As I said, I am a newbie willing to do the work, but I reread without understanding some issues. It is not a lack of willingness but a lack of background.

I see no raid controller card in the case. Where did promise fasttrak come from? Is it on the motherboard somehow? I see no direction to go in. I have done the research I can think of.

A search for the NAS server? Does that mean the OS or the hardware? I assembled this from scratch with someone else, and the OS (openmediavault) seems an unlikely place to deal with an array that I think was set up on the hardware (perhaps not?). I may be in the weeds with my thinking, and I am willing to do the research. I hope no one thinks I am wasting their time. What seems a completely obvious solution to one is literally a foreign language to another.

Thank you again for the efforts and direction; I will do my best not to waste anyone's time.

Last edited by CJBIII; 12-24-2020 at 02:17 PM. Reason: additional information
 
Old 12-24-2020, 03:33 PM   #9
computersavvy
Senior Member
 
Registered: Aug 2016
Posts: 3,345

Rep: Reputation: 1486
Quote:
Originally Posted by CJBIII View Post
Thank you computersavvy for the points in the right direction.

By bios I take it you mean the UEFI. I have looked there and posted my findings, hoping to get direction on choosing AHCI in the UEFI over the other possibilities (RAID and IDE). Nowhere does it say fasttrak, nor was I able to find what this is or how to manipulate the settings in the UEFI to gain more information about the original setup.
No, the bios for the raid controller is different than the UEFI, but during boot you should be able to get to the management for it. The raid controller gets activated before the rest of the bios takes over since it has to make the array available to the OS during boot. If it was included as part of the NAS (possibly even built in) then you may be able to get some instructions on that by searching for a manual for the NAS. If it is an add-in card in a PCI slot then searching for info on the card itself may be productive. You will need the brand name and model for searching for info on either.

Is there any point during boot where you can select the raid array, either before entering the UEFI bios or from some point within the UEFI screens? There should be if it is built in, and it may require a different key to access than the regular bios. It has been several years since I encountered that type of card, but it was relatively simple to access.

Quote:
As I said, I am a newbie willing to do the work but I reread without an understanding on some issues. It is not a lack of willingness but a lack of background.

I see no raid controller card in the case. Where did promise fasttrak come from? Is it on the motherboard somehow? I see no direction to go in. I have done the research I can think of.
If not a separate card then it is likely built in.
Quote:
A search for the NAS sever? Does that mean the OS or the hardware? I assembled this from scratch with someone else and the OS (openmediavault) seems an unlikely place to deal with an array that I think was setup on the hardware (perhaps not?). I may be in the weeds with my thinking and I am willing to do the research. I hope no one thinks I am wasting their time. What seems a completely obvious solution to one is literally a foreign language to another.

Thank you again for the efforts and direction, I will do my best not waste anyone's time.
Yes, the hardware information is needed. Brand and model should be adequate to find something related. The OS would not help with managing built-in devices.
The information you provided shows what is likely 2 arrays. One seems to be hardware on the fasttrak controller and the other was a software array that was broken by the disk failure.

UPDATE
I just reread your post and it mentioned AHCI, RAID, and IDE options. Logic says that maybe RAID is what is needed to enable and use that promise fasttrak raid array. AHCI would disable the raid controller.

Last edited by computersavvy; 12-24-2020 at 03:45 PM.
 
1 member found this post helpful.
Old 12-24-2020, 06:30 PM   #10
CJBIII
Member
 
Registered: Dec 2020
Location: Oak Lawn, IL
Distribution: openmediavault5, debian
Posts: 41

Original Poster
Rep: Reputation: Disabled
Very excited to get to work on this. Give me a little time to look all this up and I'll get right back to you.
 
Old 12-25-2020, 05:23 PM   #11
CJBIII
Member
 
Registered: Dec 2020
Location: Oak Lawn, IL
Distribution: openmediavault5, debian
Posts: 41

Original Poster
Rep: Reputation: Disabled
OMG! Thank you. Months of frustration and you have given me progress. I don't know what to do next but I am hopeful and grateful for the assistance. I will research and if you have anymore suggestions about the direction to go in...

Last edited by CJBIII; 01-13-2021 at 01:56 PM.
 
Old 12-25-2020, 05:44 PM   #12
TorC
Member
 
Registered: Dec 2020
Location: as far S and E as I want to go in the U.S.
Distribution: Fossapup64
Posts: 224

Rep: Reputation: 78
Interjection FYI, @CJBIII -- ZFS vs RAID, Linux performance benefits -- https://pcper.com/2020/05/zfs-versus...ance-benefits/

laptop or something acting up -- cannot paste URL
 
1 member found this post helpful.
Old 12-26-2020, 03:47 PM   #13
CJBIII
Member
 
Registered: Dec 2020
Location: Oak Lawn, IL
Distribution: openmediavault5, debian
Posts: 41

Original Poster
Rep: Reputation: Disabled
ZFS. I will look into it. Apparently there is a little controversy with Linus Torvalds :-).
 
Old 12-26-2020, 11:09 PM   #14
computersavvy
Senior Member
 
Registered: Aug 2016
Posts: 3,345

Rep: Reputation: 1486
Quote:
Originally Posted by CJBIII View Post
OMG! Thank you. Months of frustration and you have given me progress. I don't know what to do next but I am hopeful and grateful for the assistance. I will research and if you have anymore suggestions about the direction to go in...
Unfortunately, no more suggestions until you tell us what progress you have made and what the next hurdle is. Congratulations for getting this far.

With 4 functional drives, if you don't need to save what was already on them, it would be easy to leave them as individual disks in bios and create a software raid6 array. It would have the same usable space as your raid10 (6TB) and would preserve your data even if 2 drives failed simultaneously.
If you choose to do that the command (on my system) would be
Code:
 mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[a-d]
Note that md0 above can be any number you choose. Also, what I wrote as "/dev/sd[a-d]" could instead be itemized as the individual devices: "/dev/sda /dev/sdb /dev/sdc /dev/sdd". Use man mdadm for information on what each of those options does. (Hint: just change the level to 10 for a raid10 array.)
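To spell that out as a sketch (drive letters are examples from this thread; --create destroys existing data on the member disks, so only run it after giving up on recovery):

```shell
# RAID 6 across four disks: double parity, survives any 2 failures
sudo mdadm --create /dev/md0 --level=6 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Or RAID 10 instead (striped mirrors):
# sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[a-d]

# Watch the initial sync progress
cat /proc/mdstat
```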

Raid 10 is not that fault tolerant: it survives one failure per mirrored pair, but losing both disks of the same pair loses the array.

Once the array is successfully created you would need to use gparted or similar to create a usable partition, define its type, and format it. I use LVM, so there is a little more overhead in creating the usable partitions with lvm tools (but it does not require using gparted for partitioning). It gives me the flexibility of not allocating the entire array to my data at the first go and the ability to expand the partition as needed.
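As a sketch of that step (the filesystem choice, mount point, and volume names here are assumptions, not from this thread):

```shell
# Simplest route: put a filesystem directly on the array
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /srv/data           # example mount point
sudo mount /dev/md0 /srv/data

# LVM route instead, for flexible allocation later:
# sudo pvcreate /dev/md0
# sudo vgcreate vg_data /dev/md0
# sudo lvcreate -L 2T -n lv_data vg_data   # grow later with lvextend
# sudo mkfs.ext4 /dev/vg_data/lv_data
```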

I am now using 4 3TB drives in raid6 for my data. Earlier this year I converted from raid5 to raid6 when I installed an SSD for the OS and repurposed the spinning platter into the raid array. I have had 2 drive failures in that raid5 array over the past 5 years so I wanted the extra protection.

Last edited by computersavvy; 12-26-2020 at 11:18 PM.
 
1 member found this post helpful.
Old 12-27-2020, 12:48 PM   #15
CJBIII
Member
 
Registered: Dec 2020
Location: Oak Lawn, IL
Distribution: openmediavault5, debian
Posts: 41

Original Poster
Rep: Reputation: Disabled
In the bios, when I'm looking at the 'view drive assignments' page I see a list of the four drives. One, three and four appeared to be assigned with LD 1-1, LD 1-3 and LD 1-4 (one of the blue images I posted). The second one is listed under assignment as 'single disk'.

After doing some thinking, I have a few ideas. I wiped the SSD when I reinstalled the operating system. Does that mean that I no longer have a configuration file (mdadm.conf) to tell the operating system that this RAID 10 array is to be assembled or mounted at startup? (I'm just now learning about configuration files and what effect they have on startup.)
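(Aside: if the array can be assembled again, the usual Debian way to recreate that file is shown below; the paths are Debian's defaults, and the array details are examples.)

```shell
# Try to assemble any arrays found on the disks' metadata
sudo mdadm --assemble --scan

# Persist the result so the array is assembled at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

# Rebuild the initramfs so early boot sees the new config (Debian)
sudo update-initramfs -u
```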

My understanding is that I have to set up the new hard drive, number two, the same way the failed drive was set up. Partitioning? Then copying? My goal is to re-establish the raid10 and have it mounted at startup as it was previously. I fear that when I changed AHCI to RAID in the bios and restarted, I may have erased the drives, but they are still reading as the assigned array, so I am a little confused.

If that all seems like muddled thinking, as I said, I'm restructuring my brain to think like this and am ever so grateful when someone tells me I still have to do more research in this direction or that.

The code you have put in there is starting to look familiar to me as I do more research. Thank you and I will do more research and consider the switch to RAID6 if I am unable to reassemble and run this one.

 