Debian server won't boot after reseating drives (no bootable device)
Hello guys
I have a server with 8 HDDs in RAID 1, running Debian.
I ran into a problem: a few weeks ago I was rearranging the drives in my Debian file server, and I took them all out of the case without taking a picture first...
Now I've reseated them back into the case (probably not on the same SATA ports they were on before...), and when I try to boot I get the error "no bootable device found"...
Does any of you know how to fix this? It might be a stupid mistake, but I'm trying to learn :-)
After this, fstab might or might not mount the root partition correctly. If it is UUID based, it might. If not, you might need to edit /etc/fstab on the root disk to correct this. I am emphasizing root disk because you are running on a live USB. You need to mount the disk the root file system is on and edit /etc/fstab there. Not on the USB.
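To make the advice above concrete, here is a sketch of checking fstab from a live USB. The device name /dev/sda2 is only an example; check your own layout with lsblk and blkid first.

```shell
# Run from a live USB. /dev/sda2 is a placeholder for the real root
# partition -- identify yours with lsblk/blkid before mounting anything.
lsblk -f                        # list partitions with filesystem type and UUID
sudo blkid /dev/sda2            # show the UUID of the suspected root partition

# Mount the real root filesystem and inspect its fstab (not the live USB's):
sudo mount /dev/sda2 /mnt
grep -v '^#' /mnt/etc/fstab     # compare the entries against the blkid output

# If fstab refers to /dev/sdX names instead of UUIDs, switching to UUIDs
# makes the SATA port order irrelevant, e.g.:
# UUID=1234-abcd  /  ext4  errors=remount-ro  0  1
```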
From another forum (I hate it when people double-post) I know that there's a RAID problem present.
Knowing nothing of RAID, all I can offer is a search for "linux raid recovery".
Distribution: Debian /Jessie/Stretch/Sid, Linux Mint DE
Posts: 5,195
The OP did not state whether the boot disk is also on RAID1. It could be, since Debian can boot from RAID1.
If a "no boot device found" error occurs, no boot device has been found. That is before RAID assembly or repair comes into the picture. Once a boot device has been found and the machine starts to boot, it will try to assemble the RAID. If it cannot, it will start with a degraded RAID, but it will still start.
The most common mistake I have seen in mdadm RAID1 configurations is that during installation the step of writing a boot sector to both disks is overlooked. Redundancy is there, but no boot redundancy. The problem is minor, because you can boot from a live USB and the RAID will assemble.
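If this is the situation, restoring boot redundancy from a live USB might look like the sketch below. This assumes an mdadm RAID1 with GRUB; /dev/md0, /dev/sda and /dev/sdb are placeholders, not the OP's actual devices.

```shell
# From a live USB, after the array has assembled. Verify the real device
# names with lsblk and 'cat /proc/mdstat' before running anything.
sudo mount /dev/md0 /mnt
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done

# Install the boot sector on BOTH member disks, so either can boot alone:
sudo chroot /mnt grub-install /dev/sda
sudo chroot /mnt grub-install /dev/sdb
sudo chroot /mnt update-grub
```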
RAID5 does not have this problem because you can't boot from RAID5 anyway.
I hope the OP did not have a 4+4 RAID1 configuration with all disks now in random order. That is an interesting problem, to put it nicely.
I found and checked the other forum. Some members were smart enough to question the type of RAID (hardware RAID, fake RAID or mdadm), something I overlooked. Fortunately there was at least one member who suggested booting from a live USB and examining the disk contents. Like I did.
Since the OP is responding on the other forum, I assume he will continue to discuss solutions there.
Now I hope it was mdadm he had, because mdadm arrays are virtually indestructible. As in: you can hardly damage the disks so badly that the array will not assemble anymore. I mean damaging the contents, not physically.
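The reason mdadm does not care about drive order is that each member disk carries an md superblock with the array's UUID. A sketch of reassembling after shuffling drives, from a live USB (the /dev/sd[a-h]1 pattern is an assumption about the partition layout):

```shell
# Each member carries an md superblock identifying its array, so the
# physical SATA order does not matter. Inspect the metadata first:
sudo mdadm --examine /dev/sd[a-h]1 | grep -E 'Array UUID|Device Role'

# Then let mdadm scan all block devices and reassemble what it finds:
sudo mdadm --assemble --scan
cat /proc/mdstat                # verify the array(s) came up
```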
However... this being a server, probably with an installed RAID controller, it could have been HW RAID. No idea what a RAID controller with embedded proprietary software does with disks in the wrong order. That is why I hate this proprietary stuff.
OTOH, even if the system does not boot, the controller should still let you access its RAID manager from the BIOS. Just out of interest I will follow the other forum to see how this develops.
After this, fstab might or might not mount the root partition correctly.
Fstab is parsed much later in the boot process, after root has already been mounted by the kernel/initramfs. We have, for some reason, the real_root kernel parameter, which was previously called root. We also use dolvm and rootfstype sometimes for this purpose.
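For context: the root filesystem the initramfs mounts is chosen by the kernel command line, not by fstab. A typical Debian GRUB entry might look like the excerpt below (kernel version and UUID are illustrative, not from the OP's system):

```shell
# Excerpt of a GRUB menu entry. The initramfs mounts this device as root
# before /etc/fstab is ever read:
linux  /vmlinuz-6.1.0-13-amd64 root=UUID=1234-abcd ro rootfstype=ext4
initrd /initrd.img-6.1.0-13-amd64
# Note: real_root= and dolvm are parameters used by some initramfs setups
# (e.g. Gentoo's genkernel), not standard Debian ones.
```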