SUSE / openSUSE
While performing some pre-production testing of an openSUSE 13.2 install, I discovered a problem when booting from a degraded mdadm RAID-1 array. If the second disk, /dev/sdb (which carries the RAID member /dev/sdb2), is physically inoperative or disconnected, the system will not boot and I end up in the dracut emergency shell. At that point, mdadm shows the array, /dev/md0, to be operational albeit degraded; nothing that should prevent booting.
Thinking it still had something to do with the degraded array, I converted it to a single-disk RAID-1 in which /dev/md0 contained only /dev/sda2 (after failing out and removing /dev/sdb2 I ran: mdadm --grow /dev/md0 --raid-devices=1 --force; the full sequence is sketched below). With the second disk removed the system still would not boot, exhibiting the same behavior as above. With the second disk physically reconnected it booted without incident.
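For completeness, the shrink-to-one-member sequence was roughly the following (a sketch; device names match the layout below):
Code:
# Fail and remove the second mirror member
mdadm /dev/md0 --fail /dev/sdb2
mdadm /dev/md0 --remove /dev/sdb2

# Shrink the array to a single active device (mdadm requires --force for this)
mdadm --grow /dev/md0 --raid-devices=1 --force

# Confirm the new state
cat /proc/mdstat
mdadm --detail /dev/md0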
This has never been a problem in previous versions. Any ideas?
Here is my configuration:
fdisk -l
Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x000459a6
Device     Boot   Start       End   Sectors Size Id Type
/dev/sda1          2048   8390655   8388608   4G 82 Linux swap / Solaris
/dev/sda2  *    8390656 186648575 178257920  85G fd Linux raid autodetect
Disk /dev/sdb: 93.2 GiB, 100030242816 bytes, 195371568 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x528b2130
Device     Boot   Start       End   Sectors Size Id Type
/dev/sdb1          2048   8390655   8388608   4G 82 Linux swap / Solaris
/dev/sdb2  *    8390656 186648575 178257920  85G fd Linux raid autodetect
mdadm --detail /dev/md0
/dev/md0:
Version : 1.0
Creation Time : Fri May 15 11:32:28 2015
Raid Level : raid1
Array Size : 89128832 (85.00 GiB 91.27 GB)
Used Dev Size : 89128832 (85.00 GiB 91.27 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Wed May 27 19:39:44 2015
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : any:0
UUID : f191ca0c:b31d6d89:41232679:5e77bec6
Events : 2465
    Number   Major   Minor   RaidDevice   State
       0       8       2         0        active sync   /dev/sda2
       2       8      18         1        active sync   /dev/sdb2
Sounds like your system doesn't know how to boot off of sda. Is grub configured correctly to boot off either disk?
Are you presented with a grub prompt?
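One quick check (a sketch, assuming a BIOS/MBR setup like yours): GRUB's MBR boot code embeds the string "GRUB", so you can verify that each disk actually carries boot code of its own:
Code:
# If nothing is printed for a disk, that disk has no GRUB boot code in its MBR
dd if=/dev/sda bs=512 count=1 2>/dev/null | strings | grep GRUB
dd if=/dev/sdb bs=512 count=1 2>/dev/null | strings | grep GRUB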
* One of my early thoughts was that perhaps it had something to do with the boot loader. 13.2 uses grub2 by default and doesn't offer grub (I guess they're calling it grub legacy now) as an install option. I'm more familiar with configuring the original, so I installed it; that went without incident, but after removing /dev/sdb the system exhibited the same behavior as above.
* The emergency shell gives the option to view the boot log. Some interesting highlights:
o kernel: md0: is active with 1 out of 2 mirrors.
o kernel: md0: detected capacity change from 0 to 91267923968
o kernel: md0: unknown partition table
o systemd[1]: Found device /dev/md0
o dracut-initqueue[284]: Warning: Could not boot
o dracut-initqueue[284]: Warning: /dev/disk/by-uuid/ea3 .. does not exist
* Interestingly, the /dev/disk/by-uuid entry listed above equates to /dev/sdb1 when /dev/sdb is attached; in this case it isn't. I thought that perhaps the resume= entry in the grub menu was the cause, but no, I had already removed it. Where that UUID is still being referenced seems worth checking (see the sketch below).
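If the fstab, kernel command line, or initrd still references that swap UUID, dracut will typically wait for a device that can never appear and then drop to the emergency shell. One way to chase it down (a sketch, assuming a default openSUSE 13.2 layout; "ea3" stands for the full UUID from the warning):
Code:
# Search the usual suspects for the missing UUID (fstab, kernel cmdline, dracut config)
grep -r "ea3" /etc/fstab /etc/default/grub /etc/dracut.conf.d/ /boot/grub2/grub.cfg 2>/dev/null

# If it only turns up as the swap/resume device on the absent disk, drop or
# "nofail" that entry, then rebuild the initrd and the grub menu
dracut --force
grub2-mkconfig -o /boot/grub2/grub.cfg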
Found this on the web. Give it a try; it might fix your issue.
Code:
1) Find the stage1 file:
grub> find /boot/grub/stage1
 (hd0,0)
 (hd1,0)
grub>
The output could be different, depending on the partition where /boot is located.
2) Assuming your disks are /dev/sda (hd0) and /dev/sdb (hd1) and you have grub installed in the MBR of /dev/sda, do the following to install grub into the MBR of /dev/sdb:
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
That tells grub to treat the drive as hd0 (the first disk in the system). Thus, if the first disk fails, the second can take over its role and the boot code in its MBR will still be correct.
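If you are on the default grub2 rather than grub legacy, the rough equivalent is simply to install the boot code onto both disks (a sketch; run as root and adjust device names):
Code:
# Put grub2 boot code into the MBR of each disk so either one can boot alone
grub2-install /dev/sda
grub2-install /dev/sdb

# Regenerate the menu afterwards
grub2-mkconfig -o /boot/grub2/grub.cfg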