I make this post with great regret. Slackware has always been my number 1 favorite distribution, hands down. All of my *nix servers in my home environment run Slack. I even run some systems at work on Slack (the ones where I have the liberty to choose the OS). I know most slackers here feel the same.
So, I'm sure you guys will understand my astonishment and frustration when my beloved distro failed to surmount a situation I came across.
There is a server at my work (Intel Xeon Supermicro) running CentOS 5.5 x86_64. Some corruption in the rc.sysinit file caused the system to boot in read-only mode. Although I knew it would probably be a good idea to boot into the CentOS recovery environment, I decided to pull out my Slack disc instead. Besides, all I had to do was activate the underlying VG, mount it, and edit the file. How hard could this be?
Apparently, much more difficult than it looked. First of all, Intel's Matrix Storage Manager ROM (IMSM) was loaded on the system: a RAID1 lived inside the IMSM container, with an LVM2 VG under that. So, when I booted the system with Slack 13.37's huge kernel, I thought this would be easy as pie.
A quick check of /proc/mdstat shows the appropriate raid1 between /dev/sda1 and /dev/sdb1.
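For anyone retracing this, here's a hedged sketch of that check. The mdstat text below is an illustrative stand-in for the real /proc/mdstat (reading the live file needs the actual box); the md126/md127 names are what udev typically assigns to an IMSM volume and its container, not confirmed from my session:

```shell
# Stand-in sample for /proc/mdstat on an IMSM box; with IMSM, md127 is
# usually the (inactive-looking) container and md126 the RAID1 volume.
mdstat='md126 : active raid1 sda[1] sdb[0]
md127 : inactive sda[1](S) sdb[0](S)'

# Count active raid1 volumes -- this is what you want to see equal 1.
up=$(printf '%s\n' "$mdstat" | grep -c '^md126 : active raid1')
echo "active raid1 volumes: $up"
```

On the real machine you'd just eyeball `cat /proc/mdstat` instead.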
An assemble scan wasn't necessary, although just for kicks I stopped the raid1 and restarted it:
mdadm --stop /dev/md126
mdadm --assemble --scan
which restarted the array with the appropriate IMSM container. mdadm could even detect the metadata correctly when I examined the array:
I could clearly see I was on Intel Matrix Storage Manager ver 22.214.171.1242.
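As an aside, the metadata format can be confirmed straight from mdadm's examine output: IMSM member disks carry the on-disk magic string "Intel Raid ISM Cfg Sig.". A sketch, parsing a stand-in sample since the real `mdadm -E /dev/sda` run needs root and the actual disk:

```shell
# Stand-in for the first lines of `mdadm -E /dev/sda` on an IMSM member.
sample='/dev/sda:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.3.00'

# The Intel magic string identifies IMSM (external) metadata.
is_imsm=$(printf '%s\n' "$sample" | grep -c 'Intel Raid ISM')
[ "$is_imsm" -ge 1 ] && echo "imsm metadata detected"
```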
So, after running:
vgchange -a y VolGroup00
and activating VolGroup00, all I had left to do was mount the appropriate LV.
mount /dev/mapper/VolGroup00-LogVol00 /mnt/0
which, unfortunately, failed miserably with a segmentation fault at md.c:6/
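For reference, here's the whole activate-and-mount sequence in one place. The root-only commands are commented out (they need the assembled array); the one live line just derives the device-mapper path from the VG/LV names from the post:

```shell
vg=VolGroup00
lv=LogVol00
dev="/dev/mapper/${vg}-${lv}"   # device-mapper node LVM creates for the LV
echo "$dev"

# The actual steps, run as root with /dev/md126 assembled:
# vgscan                        # discover VGs sitting on the md device
# vgchange -a y "$vg"           # activate the volume group
# mkdir -p /mnt/0
# mount "$dev" /mnt/0           # this is the step that segfaulted for me
```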
You may ask, why didn't I just attempt to mount the LV after stopping the raid array? Well, I thought the same thing. This is a raid1 after all, right? The data on /dev/sda1 is the same as on /dev/sdb1. Well, after stopping /dev/md126 and running:
pvs
vgs
I'm sure you guys will understand my surprise when both commands reported no PVs and no VGs on the system. After poking around in the Intel Matrix Storage Manager ROM, I discovered that encryption was enabled on the RAID1 array. Stopping the raid may break the filesystem, or the LV may only be visible while the array is intact.
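The check itself is trivial; this sketch just captures what I saw, with an empty string standing in for the real `pvs --noheadings` output after the array was stopped (the `--noheadings` flag is my addition for clean scripting; interactively I ran plain pvs/vgs):

```shell
# Stand-in for `pvs --noheadings` output after `mdadm --stop /dev/md126`:
# nothing at all -- LVM no longer sees any physical volumes, because the
# PV signature is only reachable through the assembled md device.
pv_output=''

if [ -z "$pv_output" ]; then
    echo "no PVs visible with the array stopped"
fi
```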
I was able to fix my conundrum, although I had to swallow my Slack pride, load the CentOS 5.8 x86_64 disc, boot "linux rescue", and let rescue auto-mount the LVM.
My only assumption is that there is an incompatibility issue with mdadm between Slack 13.37 and CentOS 5.5.
Slack 13.37 ships with mdadm 3.1.5, which is pretty damn new (a 2011 release), considering that mdadm is currently on version 3.2. This version also supports the IMSM metadata format.
It looks like CentOS 5.5 doesn't even attempt to assemble the IMSM RAID1 volume. CentOS 5.5 ships with mdadm v2.6.9, which I don't believe had support for the IMSM metadata format. Furthermore, after booting into the fixed CentOS 5.5 install on the Intel Supermicro, I discovered CentOS 5.5 doesn't even touch the IMSM raid1 volume; there is no raid volume according to /proc/mdstat!
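External-metadata (IMSM/DDF) support arrived with the mdadm 3.0 series, so a crude major-version gate tells you whether a rescue environment can even see these arrays. A sketch, with the version string hard-coded in place of actually parsing `mdadm --version`:

```shell
ver='3.1.5'          # stand-in for the version parsed from `mdadm --version`
major=${ver%%.*}     # shell parameter expansion: keep text before first '.'

if [ "$major" -ge 3 ]; then
    echo "mdadm $ver: external (IMSM) metadata supported"
else
    echo "mdadm $ver: too old for IMSM (pre-3.0)"
fi
```

With ver='2.6.9' (what CentOS 5.5 ships) the same gate reports it as too old.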
So, I can't fault Slack here. I know Patrick did the smart thing bundling mdadm 3.1.5 with 13.37. Also, there could be some kernel issues I'm overlooking. I know Red Hat/CentOS follows the model of sticking with a long-term-stable kernel (2.6.18.x) and backporting all the updates to it. I'd have to scour the CentOS/Red Hat changelogs, but maybe they got an update from Intel that allows them to read IMSM encrypted filesystems. Or it may just be that mdadm v2.6.9 can't see IMSM at all, which is what allows CentOS to see the LVM.
Anyway, this whole rant is from one sysadmin to another, as a forewarning: always make sure you have plenty of ISOs at your disposal, and don't be afraid to use another distro, even when it isn't your favorite. In the corporate world, I've yet to come across a client that purposely requests Slack in their production/dev environment. Although, you'd better believe the sysadmins who know their guns will be running it in the background, and you won't even know it.