LinuxQuestions.org
Slackware This Forum is for the discussion of Slackware Linux.

Old 09-28-2018, 07:16 AM   #1
mfoley
Senior Member
 
Registered: Oct 2008
Location: Columbus, Ohio USA
Distribution: Slackware
Posts: 1,486

How to make 2nd drive (/dev/sdb) bootable


I have a Slackware 14.2 system with two 1TB drives installed. I'd like the 2nd drive to be bootable in the event of failure of the 1st. Yes, I know I can use an mdadm RAID-1 for this -- I have such RAIDs on 3 other computers and may eventually do that on this one too -- but for the moment it's not a RAID, if for no other reason than I'd like to see whether this works.

I am "cloning" the contents of the /dev/sda boot partition to the 2nd drive (mounted under /mnt/image) with:
Code:
rsync -aHAxv --delete --exclude=lost+found --one-file-system  / /mnt/image/
I can set the BIOS boot order to: DVD, drive 1, drive 2. As for making the drive bootable, I'm using lilo. Would the following be correct?
Code:
# Change drive in lilo.conf
cp -p /mnt/image/etc/lilo.conf /mnt/image/etc/lilo.sda
sed 's#/dev/sda#/dev/sdb#' /mnt/image/etc/lilo.sda >/mnt/image/etc/lilo.conf

mkdir -p /mnt/image/proc
mkdir -p /mnt/image/sys 
mkdir -p /mnt/image/dev 
  
# Set linux loader on target drive

mount --bind /proc /mnt/image/proc
mount --bind /sys /mnt/image/sys  
mount --bind /dev /mnt/image/dev  
chroot /mnt/image

lilo -M /dev/sdb 
lilo -b /dev/sdb 
exit

umount /mnt/image/dev
umount /mnt/image/sys
umount /mnt/image/proc
umount /mnt/image
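For what it's worth, that sed rewrite can be sanity-checked against a throwaway file before touching the real config. A minimal sketch with made-up lilo.conf contents (note the global replace also rewrites root=, which is what you want for the clone):

```shell
# Throwaway demo of the substitution; contents are illustrative.
conf=/tmp/lilo.conf.sdb
cat > "$conf" <<'EOF'
boot = /dev/sda
image = /boot/vmlinuz
  root = /dev/sda2
  label = Linux
EOF
# same substitution as in the procedure above, applied globally
sed -i 's#/dev/sda#/dev/sdb#g' "$conf"
cat "$conf"
```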
One thing I wonder about: if drive 1 failed, would drive 2 still be /dev/sdb?

Last edited by mfoley; 09-28-2018 at 07:27 AM.
 
Old 09-28-2018, 07:35 AM   #2
BW-userx
LQ Guru
 
Registered: Sep 2013
Location: MID-SOUTH USA
Distribution: Slackware 14.2 / Slackware 14.2 current / Manjaro / Parrot
Posts: 7,375

BIOS boot order: look at the first drive for a bootable system; if it isn't bootable, go to the next assigned drive, etc.

On making the hdd bootable: have a working OS installed on it, and I'd install grub onto its MBR or UEFI boot dir (not tested by me), so when the BIOS looks to that hdd in its turn it will see the grub boot listing and should boot off of it.

So now you have grub installed on both HDDs. The primary hdd's grub will always get used because the BIOS uses that hdd as the primary boot device; if it fails, the BIOS moves to the next on the list and does the same, looking for a bootable system. (The same idea applies with lilo.)

Quote:
One thing I wonder about, if drive 1 failed would drive 2 still be /dev/sdb?
Good question, I have no idea. You could use UUIDs in your fstab for one or both drives to sidestep that issue.

The way to test this, if you don't want to set up UUIDs: leave the BIOS boot order where it is (first hdd, then your 2nd hdd next on the list), plug a non-bootable hdd in as the primary, and see what device name the system gives the second drive in the list when it boots from it.

You could even use your current primary OS hdd for that test -- just swap a bootable and a non-bootable drive around against the boot listing in your BIOS.
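For the record, the UUID idea in fstab looks like the line below. The UUID shown is made up; on a real system you'd get the actual value with `blkid -s UUID -o value /dev/sdb1` or from /dev/disk/by-uuid/:

```text
# /etc/fstab excerpt -- mount by UUID instead of device name
# (example UUID, not a real one)
UUID=3f9c2a17-8b4e-4d2a-9c1e-0f6d5b7a2c3e  /mnt/image  ext4  defaults  0  2
```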

Last edited by BW-userx; 09-28-2018 at 07:45 AM.
 
Old 09-28-2018, 09:57 AM   #3
IsaacKuo
Senior Member
 
Registered: Apr 2004
Location: Baton Rouge, Louisiana, USA
Distribution: Debian 9 Stretch
Posts: 2,349
Blog Entries: 8

Quote:
Originally Posted by mfoley View Post
One thing I wonder about, if drive 1 failed would drive 2 still be /dev/sdb?
No, unless you install a new drive to be a new /dev/sda. If it's the only hard drive in the system, it will be /dev/sda.

This is a reason to do things using UUIDs instead of device names. Using UUIDs is generally more robust, but it depends on what precisely you're trying to do.

In your use case, I'm imagining the following scenario:

1) You have two different drives, each with just one partition - /dev/sda1 and /dev/sdb1. /dev/sda1 is the main OS partition; /dev/sdb1 is the backup.

2) You have already used rsync to copy over the files; you will occasionally use rsync to keep things sync'd up from the main drive to the backup. (Personally, I love this approach more than using RAID1. It allows the second drive to spin down, and can save me from an "oops" mistake.)

3) Someday, one of the drives fails. If it's sdb, just shut down at some convenient time and remove the failed backup drive. If it's sda, though...

4) Failure of sda means the OS is hosed, so you'll likely need to hard power down the computer. Okay, this is a bit worse than RAID1 behavior, but it's rare enough that I'm okay with that. If I really care about maximal uptime, I use RAMBOOT (if the computer's RAM fails, there's just no staying up).

5) After hard powering down, remove sda and power back on.

Okay, at this point "sdb" becomes sda. Assuming /etc/fstab and LILO are configured to use device names instead of UUIDs, the changed UUID of the sda1 partition does not matter. In this use case, it's actually better to avoid using UUIDs. It lets you just swap in sdb and everything just plain works. I'm not familiar with how LILO works, so I don't know if this is actually how you can do it.

The alternative is to do everything the UUID way. In this case, you'll need to set up rsync to exclude etc/fstab, and possibly other boot related things (I could tell you off the top of my head what things this would be with GRUB2 and Debian, but not LILO and Slackware.)
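If you do go the UUID route, the rsync exclusions might look like the sketch below. It uses throwaway directories so it can be run harmlessly; on the real system the source would be / and the destination /mnt/image/, and the excluded names (etc/fstab, and with lilo also etc/lilo.conf) are the files that must stay different on the backup drive:

```shell
# Throwaway demo of the exclusions; file contents are illustrative.
src=/tmp/clone-demo/src; dst=/tmp/clone-demo/dst
mkdir -p "$src/etc" "$dst"
printf '/dev/sdb1 / ext4 defaults 1 1\n' > "$src/etc/fstab"
printf 'darkstar\n' > "$src/etc/HOSTNAME"
# same flags as the clone command earlier in the thread, plus excludes
# for the files that must stay different on the backup drive
rsync -aHAxv --delete --one-file-system \
    --exclude=lost+found --exclude=etc/fstab --exclude=etc/lilo.conf \
    "$src/" "$dst/"
```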

One other possibility is to truly clone the partition, giving the backup partition the same UUID. This can confuse things if both hard drives are installed in the same computer at the same time, so this works best if the backup drive is in another computer. You use nfs or networked rsync to periodically sync them up. This can be a great way to minimize your configuration headaches or potential points of confusion. It also means that a failure which would have destroyed both hard drives (such as flooding) would still leave the backup drive intact.
 
1 member found this post helpful.
Old 09-28-2018, 11:24 PM   #4
mfoley
Senior Member
 
Registered: Oct 2008
Location: Columbus, Ohio USA
Distribution: Slackware
Posts: 1,486

Original Poster
So, I tried various things. I made sure the lilo.conf and fstab on sdb2 referenced sdb2. I reformatted /dev/sda2 to ext4 so there were no boot files, etc. I rebooted and the system did start to boot, but ended with "Kernel panic - not syncing: No working init found. Try passing init= option to kernel ...".

What I don't know for sure is whether it really started booting off sdb2, or whether the MBR boot sector on sda still pointed to some physical sector on sda2 that still contained the boot image despite my reformat (which did not wipe the data). Either way, trying to boot off sdb in this way seems problematic. I wanted to leave drive 1 in place to simulate a failed or corrupt drive, but this still doesn't answer what would happen if the drive failed completely -- would Linux then recognize drive 2 as sda? It seems highly probable that if drive 1 fails, the computer would not reboot and I'd have to physically swap drive cables, boot off the DVD, chroot, run lilo, etc.

As for using UUIDs, that's doable in fstab, but I don't think it is in lilo.conf. It wants the actual device there, especially since UUIDs are per-partition and the lilo 'boot =' directive wants the whole device (/dev/sda).

All of these issues -- configuring the BIOS, lilo.conf and fstab, and drive swapping -- go away if the two drives are simply mdadm RAID-1 members. Which I went ahead and configured, and that is working fine. I have a script that monitors the status of the RAID and emails me if a member fails.
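A monitoring script of that sort might be sketched as below. This is an assumption about the approach, not the poster's actual script: the mail address is a placeholder, the pattern assumes the usual /proc/mdstat status brackets (e.g. "[2/2] [UU]", where "_" marks a failed member), and taking the file as an argument makes it easy to test against a sample:

```shell
#!/bin/sh
# Sketch of a RAID health check: look for a failed member ("_") in the
# [UU]-style status brackets of /proc/mdstat and mail a warning.
# admin@example.com is a placeholder address.
MDSTAT=${1:-/proc/mdstat}
if grep -q '\[[U_]*_[U_]*\]' "$MDSTAT" 2>/dev/null; then
    echo "RAID degraded on $(hostname)" \
        | mail -s "RAID degraded" admin@example.com
fi
```

Run it from cron to get periodic checks; `mdadm --monitor` can do something similar as a daemon.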

As for grub versus lilo, I have the following quote, for which I unfortunately did not note the reference:

"... apparently the lilo boot loader will update the MBR (Master Boot Record) of both/all RAID array members on the boot device, whereas this does not seem to be the case with grub. This means that even if the “main” boot disk (/dev/sda) fails, the other RAID-1 member will still be able to boot. This is important for the ability to hot-swap the failed drive and permit mdadm to rebuild the replaced drive."
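In lilo.conf terms, the behaviour that quote describes corresponds to the raid-extra-boot option. A sketch with illustrative device names and paths (check `man lilo.conf` for the exact values supported):

```text
# lilo.conf sketch for booting a RAID-1 root (illustrative)
boot = /dev/md0
raid-extra-boot = mbr-only   # also write boot records to each member's MBR
image = /boot/vmlinuz
  root = /dev/md0
  label = Linux
  read-only
```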
 
1 member found this post helpful.
Old 09-29-2018, 02:44 AM   #5
IsaacKuo
Senior Member
 
Registered: Apr 2004
Location: Baton Rouge, Louisiana, USA
Distribution: Debian 9 Stretch
Posts: 2,349
Blog Entries: 8

As I noted before, the current sdb will become sda if the current sda fails completely or is removed. It does not matter if you try to swap cables or whatever. If it's the only functioning hard drive in the system, it will be sda no matter what SATA port it is connected to.

The behavior of grub-install is indeed to install grub only on the (one) device specified. You can manually run grub-install on as many other devices as you want, though, so I really don't see why that's a problem.
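For example, something like the loop below would put grub on both disks. This is a sketch, not a tested recipe: the device list is an assumption, and the DRY_RUN guard (on by default here) only records what would be run; set DRY_RUN=0 and run as root to actually install:

```shell
#!/bin/sh
# Put grub's boot code on every disk so either one can boot alone.
# DRY_RUN=1 (default) only logs the commands; device list is illustrative.
DRY_RUN=${DRY_RUN:-1}
for dev in /dev/sda /dev/sdb; do
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: grub-install $dev" | tee -a /tmp/grub-install.log
    else
        grub-install "$dev"
    fi
done
```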
 
Old 09-29-2018, 08:22 AM   #6
BW-userx
LQ Guru
 
Registered: Sep 2013
Location: MID-SOUTH USA
Distribution: Slackware 14.2 / Slackware 14.2 current / Manjaro / Parrot
Posts: 7,375

As I stated, install a separate boot loader onto your backup hdd's MBR, as IsaacKuo also said. If sda fails and the secondary becomes sda by assignment, then the lilo on your secondary's MBR should reference it as sda, not sdb.

You might have to make your first hdd fail to see how the system names the secondary when it takes the first's place, then write your secondary's lilo to match that fail state, not as if it were still a secondary hdd -- because it should be using that hdd's MBR lilo to boot from.

My thinking is that the BIOS sees hdd 1 in a fail state, goes to the next on the list, finds the 2nd hdd, and uses that MBR bootloader to boot the system installed on it.

I hope that makes sense, it does to me.


I use grub(2), it's a little easier, but here is something on lilo and uuid.

How to configure fstab and lilo.conf with persistent naming
 
Old 09-29-2018, 11:05 AM   #7
enorbet
Senior Member
 
Registered: Jun 2003
Location: Virginia
Distribution: Slackware = Main OpSys for decades while testing others to keep up
Posts: 2,016

Personally I find grub way overly complicated for a very simple job, so as long as LILO remains capable of doing that job I see no reason to install or use grub. LILO has several benefits, not least that it is easy to install to any MBR or root partition, making redundancy and recovery quite simple. It can be set up with a menu entry for the same system designated in one entry as "sda" and in another as "sdb", by copying the kernel to the drive that has LILO installed in the MBR and using the "other" (chainload) designation. It can even hand off from one LILO to another, each with its own entries. Beyond simple recovery from drive failure, I do this to always have a working kernel option for trial distros -- often on other drives, even external ones -- that routinely automate kernel updates. I always have a fallback version this way.

LILO can handle both UUID and LABEL formats provided they are properly declared. IMHO there is no basic boot function that grub can accomplish that LILO cannot. I have no issue with folks who, for whatever reason, prefer grub, but the OP's question and concerns in no way imply that the only solution is to switch boot loaders. There are numerous ways to address them with LILO.
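As a sketch of that chainload setup, the lilo.conf entries might look like this (devices and labels are illustrative, not a tested config):

```text
# Normal entry booting the kernel on the first drive
image = /boot/vmlinuz
  root = /dev/sda1
  label = Linux
  read-only

# Chainload entry: hand off to whatever boot sector is on the second drive
other = /dev/sdb
  label = Chain-sdb
```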
 
1 member found this post helpful.
Old 09-29-2018, 11:16 AM   #8
colorpurple21859
Senior Member
 
Registered: Jan 2008
Location: florida panhandle
Distribution: slackware64-current, puppy, ubuntu
Posts: 2,509

Boot into the system installed on the second hard drive, install whatever bootloader you want to the second hard drive's MBR, and use UUIDs to prevent device-name problems.
 
Old 09-30-2018, 11:21 AM   #9
mfoley
Senior Member
 
Registered: Oct 2008
Location: Columbus, Ohio USA
Distribution: Slackware
Posts: 1,486

Original Poster
The problem with the "mirrored" drive approach (versus RAID) I've had in the past is that a failed boot drive does not always fail in the sense that the BIOS detects it as a physical failure. If the BIOS simply can't boot from it (corrupt boot sectors?), it should move on to the next configured boot device -- and in that case I don't think drive 2 ends up as sda. I have a long-ago memory of this happening, which is why I started changing the fstab and lilo.conf on the 2nd drive to reference sdb.

So perhaps it depends on HOW drive 1 fails. If it fails such that the BIOS doesn't recognize a drive as installed at all, perhaps drive 2 will become sda. Or perhaps everything I just said applies only to the olden-days non-SATA drives -- which is probably when I did those initial experiments -- and drive 2 now always becomes sda. It would be interesting to conduct more experiments but, as mentioned, I've gone ahead and configured this system as RAID-1, and I have tested that it still boots with a failed drive 1, since lilo installs the MBR on both devices.
 
Old 09-30-2018, 11:47 AM   #10
IsaacKuo
Senior Member
 
Registered: Apr 2004
Location: Baton Rouge, Louisiana, USA
Distribution: Debian 9 Stretch
Posts: 2,349
Blog Entries: 8

My experience is that the difference is not such a big deal. I mean... let's say sda fails. Since the OS is running from it alone (no RAID), the computer's going to crash soon anyway, so a shutdown is inevitable. At that point, might as well remove the failed drive.

With the failed drive physically removed, the other drive becomes sda no matter what, so...well, I just planned for things to work that way in advance. For the most part I switched to the UUID way of doing things, though.

I don't really stress out about setting up bootability too much. If worst comes to worst, I can just boot from a USB install and run "update-grub" to automatically generate boot menu entries for any (working) hard drive partitions. Then I reboot, select the desired hard drive partition to boot from, and run grub-install if necessary. Sure, it takes longer to do this than to have prepared things properly in the first place. But the point is, it works even if nothing was prepared properly. Since I know I've got a solid backup plan, I don't stress out about having everything pre-prepared exactly right.
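For reference, that rescue path boils down to roughly the outline below (Debian-flavoured, as in the post, with illustrative device names; on Slackware you'd run lilo in the chroot instead of the grub commands):

```text
boot the USB installer and open a shell, then:
  mount /dev/sda1 /mnt
  for d in dev sys proc; do mount --bind /$d /mnt/$d; done
  chroot /mnt
  update-grub            # regenerate menu entries for detected systems
  grub-install /dev/sda  # reinstall boot code if needed
  exit; reboot
```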
 
Old 09-30-2018, 06:22 PM   #11
mfoley
Senior Member
 
Registered: Oct 2008
Location: Columbus, Ohio USA
Distribution: Slackware
Posts: 1,486

Original Poster
IsaacKuo: I hear you. I suppose it all depends on needs. With the non-RAID solution, a failed sda is, as you say, "going to inevitably crash soon." And "soon" typically would be within seconds of the failure as, presumably, the drive is constantly being updated during normal running. In my case, this computer is a remote backup and not only am I not physically there, it is located at some distance from the office. With the RAID setup, the system will continue to operate, even boot, but the RAID status will show a failed member. I can therefore replace the failed member at my leisure. I've got the RAID configuration going on 3 other computers, all of which (but not this one) also have the drives installed in hot-swap bays -- no disassembly required. I might even put hot-swap bays in this remote backup machine, but no rush.
 
Old 09-30-2018, 07:56 PM   #12
IsaacKuo
Senior Member
 
Registered: Apr 2004
Location: Baton Rouge, Louisiana, USA
Distribution: Debian 9 Stretch
Posts: 2,349
Blog Entries: 8

FWIW, you might be surprised how long GNU/Linux can limp along with a failed / partition. I have a lot of nfs-booting machines with / on an nfs share. I've accidentally rebooted or shut down the nfs server without first shutting down or suspending the clients enough times to have learned what happens.

They can actually "survive" the loss and return of / for quite some time. At this point, I'm comfortable enough with the effects that I'll purposefully reboot the nfs server without doing anything special about the clients sometimes. At least with nfs, various actions which attempt to access the / file system will typically just hang there waiting patiently for the file access to eventually succeed. For example, if I right click something where a pop-up menu is supposed to appear, it'll just...well, eventually when the nfs server comes back up the pop-up menu will appear.

In any case, if the / partition is gone for good and never coming back, the functionality of the system is immediately very crippled. You're not going to be able to do anything to rescue it without a shutdown at that point, so whatever.
 
Old 10-03-2018, 02:48 AM   #13
FlinchX
Member
 
Registered: Nov 2017
Distribution: Slackware Linux
Posts: 252

Slightly related, so probably not worth making another thread just for it: a while ago I found that the Slackware64-14.2 installer would not let me choose between an EFI partition on the 1st drive and an EFI partition on the 2nd drive on a UEFI computer with two hard drives. Since I'm not following -current, I wonder if that capability was added and will be available in 15.0.
 
Old 10-04-2018, 12:42 AM   #14
mfoley
Senior Member
 
Registered: Oct 2008
Location: Columbus, Ohio USA
Distribution: Slackware
Posts: 1,486

Original Poster
So, are you saying that the installer (setup) would not let you install Slackware on the 2nd drive? ... assuming the 2nd drive was partitioned and formatted, of course ...
 
Old 10-04-2018, 02:14 AM   #15
FlinchX
Member
 
Registered: Nov 2017
Distribution: Slackware Linux
Posts: 252

Quote:
Originally Posted by mfoley View Post
So, are you saying that the installer (setup) would not let you install Slackware on the 2nd drive? ... assuming the 2nd drive was partitioned and formatted, of course ...
I am saying that the installer of 14.2 did not let me choose the EFI partition on the second hard drive to put elilo there. I run Slackware from the second hard drive, but the EFI partition is on the first drive.
 
  



Tags
boot issues, lilo


