SUSE / openSUSE: This forum is for the discussion of SUSE Linux.
Location: North of Boston, Mass (North Shore/Cape Ann)
Distribution: CentOS 7.0 (and kvm/qemu)
Posts: 91
Rep:
How do I put /boot/efi on both disks of a RAID?
Moving from CentOS to openSUSE Leap 15.2.
I have two 2 TiB drives, RAID'd and LVM'd:
sd[ab]1 500 MiB EFI System Partition /boot/efi
sd[ab]2 1.82 TiB RAID (/dev/md0), also /dev/system/home (LVM)
sd[ab]3 2 GiB swap
I spent several days trying to follow the directions.
I'm finally booting into single-user mode, ready to start with KVM.
But in testing, I can boot off my first disk, but not my second.
My reading to date finally led me to:
Code:
dd if=/dev/sda1 of=/dev/sdb1
YaST Partitioner looks good.
Code:
fdisk --list
looks good.
Doesn't boot off second drive.
What did I forget to do, or do wrongly?
How do I fix it from here?
I'm not very familiar with EFI, but I'm almost sure you can only have one partition mounted at /boot/efi.
Because your BIOS is set to boot from your main drive (/dev/sda), it looks for a /boot/efi partition on that drive. That's why I think you won't be able to boot from your second drive (/dev/sdb).
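If that's the case, one possible missing step is that the firmware has no boot entry pointing at the second disk: copying the ESP's contents with dd does not register it with UEFI. A sketch using efibootmgr, assuming the ESP is partition 1 on /dev/sdb and the loader path mirrors the one already used on sda (check the output of the list command first; the path below is an assumption):

```shell
# List current UEFI boot entries; note the \EFI\...\*.efi loader path
# used by the entry that boots from the first disk.
efibootmgr -v

# Add an entry for the copy of the ESP on the second disk.
# The label and loader path here are illustrative; reuse the loader
# path shown for the first disk's entry.
efibootmgr --create --disk /dev/sdb --part 1 \
    --label "openSUSE (disk 2)" \
    --loader '\EFI\opensuse\grubx64.efi'
```

With an entry for each disk, the firmware boot menu should offer both even if one drive is missing.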
Thanks, marav
My issue is: if, of the RAID disk pair, the one that has the boot partition fails, I've lost my data, because I can't boot from the other disk while I get around to replacing the failed one.
I'll read your link tomorrow and see if it solves my problem.
jefro -- sda is the drive RAID'd to sdb; sda1 was the boot partition, so I dd'd it to the boot partition on the other drive, sdb1, as instructed by the article I referred to.
When I built the system via YaST and the Partitioner, as instructed by the SUSE install, I set sda2 and sdb2 as Linux RAID, then selected RAID to join them into an array, then gave it to LVM.
SWAP is the 3rd partition on both drives.
It's quite possible I'm misreading the install instructions, but that's where I am and how I got there.
syg00: I press <F2> during POST, which lets me choose to boot from either physical disk ONE or TWO; choosing TWO tells me there is nothing to boot from.
I may disconnect power from ONE later to test as you suggest, but I suspect the results will be the same.
Quote:
A separate partition for /boot is not required if you install the boot loader in the MBR. If installing the boot loader in the MBR is not an option, /boot needs to reside on a separate partition.
I don't know how to, or not to, or if I can, do this.
I'm just following the instructions during the install.
And this other thing:
Quote:
For UEFI machines, you need to set up a dedicated /boot/efi partition. It needs to be VFAT-formatted and may reside on the RAID-1 device to prevent booting problems in case the physical disk with /boot/efi fails.
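A sketch of what that quote might look like on the command line, assuming sd[ab]1 are the ESP partitions (the device name /dev/md/esp is illustrative). The key detail is metadata format 1.0, which stores the RAID superblock at the end of the partition, so the firmware still sees an ordinary VFAT filesystem on each member:

```shell
# RAID-1 the two ESPs; metadata 1.0 keeps the superblock at the END
# of each member, leaving the VFAT filesystem readable by the firmware.
mdadm --create /dev/md/esp --level=1 --raid-devices=2 \
    --metadata=1.0 /dev/sda1 /dev/sdb1

# Format the array once; both members stay in sync from then on.
mkfs.vfat /dev/md/esp

# Mount the array (not an individual member) at /boot/efi,
# e.g. via an /etc/fstab entry.
```

The YaST Partitioner expresses the same thing graphically when you make /boot/efi a RAID device, as described later in the thread.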
Clearly, I've missed something: either I did something I shouldn't have, or I didn't do something I should have.
Solved.
Many things.
First, looking at the screen instead of my print-out of it, I see more clearly that I was to make three (logical) RAID devices: one for /boot/efi, one for swap, and one for the data.
(Once you know it) you can see it says that on the instruction page I linked in my last post.
My second confusion was that the Partitioner, after sizing, assigns function, formatting, and mount point.
If, after sizing, you declare the function to be Linux RAID, the rest of those options go away.
Later, you amalgamate the partitions into a RAID. That's it; nothing indicates you now have to modify the RAID you just made to assign the formatting and mount point.
The system is now built and boots off either drive (when I press <F2> and choose which disk to boot).
That's good enough for me for now. I'll disconnect the disks for that test later (as well as try the mdadm command for anything it will tell me).
A final question, for me and not the group: I have a choice. I can LVM the RAID'd device, OR I can keep the partitions separate (un-RAID'd), give them both to LVM, and let it do the RAIDing.
The latter is what I did a decade or so ago with my CentOS 7 set-up, which also helped confuse me: was I to RAID then LVM, or LVM then RAID?
So I'm experimenting with that now. There seems to be a size restriction with RAID then LVM, but perhaps I'm (again) misunderstanding things.
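For the LVM-does-the-RAIDing route, a minimal sketch (the volume group name matches the /dev/system/home naming used earlier; the logical volume name and size are illustrative):

```shell
# Give both raw partitions to LVM, un-RAID'd
pvcreate /dev/sda2 /dev/sdb2
vgcreate system /dev/sda2 /dev/sdb2

# Create a mirrored logical volume: -m 1 means one extra copy,
# so the data lives on both physical volumes
lvcreate --type raid1 -m 1 -L 100G -n home system
```

Under the hood LVM uses the same kernel MD code for its raid1 type, so the choice is mostly about whether you want per-volume mirroring (LVM) or one mirrored block device under everything (mdadm).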
Thanks to all for your advice and counsel to date.
I'll mark this closed/solved.