Old 01-19-2014, 08:53 PM   #1
Erik_FL
Member
 
Registered: Sep 2005
Location: Boynton Beach, FL
Distribution: Slackware
Posts: 821

Memory Leak Installing Slackware 14.1 32-bit to Device Mapper partition?


I'm attempting to install Slackware 14.1 32-bit on a system with fake RAID. I had Slackware 14.0 installed on the system with no problem.

When I attempt to install Slackware 14.1 everything goes OK until setup installs the packages. As the packages are being installed I can see the allocated virtual memory increase until all of my 2 GB of physical memory is in use. At that point the message "Killed" is printed on the console terminal. The error happens at a different point every time I attempt to install Slackware.

While monitoring processes with "top" I do not see any process allocating excessive memory. It appears that the memory is being allocated as a result of writing files to the Device Mapper partition.

Another odd thing is I do not see the hard disk access LED blinking to indicate disk access. It almost seems as if the data is never being written to the hard disk.
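
Here is roughly what I am watching from a second console while setup runs (a sketch from memory; the exact numbers change every time):

Code:
free -m              # free memory steadily shrinks as packages are installed
df -h /mnt           # the install target should be filling up instead
dmesg | tail -n 20   # after the "Killed" message, the OOM killer shows up here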

Is there a known problem with the Device Mapper or Promise FastTrack 374 SATA driver in Slackware 14.1 32-bit? It looks like there might be a memory leak problem.

Here is a little bit more about how I am installing Slackware. I am using the "dmraid" program, which is essentially a configuration utility that creates Device Mapper devices matching the partitions in a fake RAID array. After the Device Mapper is configured, "dmraid" is out of the picture. The actual striping or mirroring is done by the standard Linux Device Mapper.

I built "dmraid" and the associated libraries under Slackware 14.1 32-bit on another computer. When I run "dmraid" it correctly detects my partitions and creates the devices. I can mount and access the files in the partitions.
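
For reference, the dmraid side of this is only a couple of commands (output trimmed; the set names come from the Promise metadata):

Code:
dmraid -r    # list the raw disks that carry RAID metadata
dmraid -s    # show the RAID sets that were found
dmraid -ay   # activate every set; the partitions appear under /dev/mapper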

After booting the Slackware installation disc, I have to do a number of things to install to a "dmraid" partition.
  1. Copy the dmraid program to /sbin on the RAM disk
  2. Copy the dmraid libraries to /lib on the RAM disk
  3. Use "dmraid -ay" to detect and activate the partitions
  4. Edit /usr/lib/setup/setup to replace "probe -l" with "fdisk -l" so that the partitions can be detected (see the sketch after this list)
  5. Install Slackware to the correct mapper device, e.g. /dev/mapper/pdc_cbbajbheep2
  6. Create an initrd with a modified "init" script that runs "dmraid" (also sketched below)
  7. Configure the boot loader to use a UUID for root
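
Steps 4 and 6 amount to something like this (a sketch; the sed edit is the only change I make to the setup script, and the dmraid line is the only addition to the initrd's "init"):

Code:
# step 4: make setup list partitions with fdisk instead of probe
sed -i 's/probe -l/fdisk -l/' /usr/lib/setup/setup

# step 6: inside the initrd tree, init just has to activate the sets
# before the root filesystem gets mounted
/sbin/dmraid -ay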

I am having problems with step 5.

This does not appear to be a problem with "dmraid". It hasn't changed for a long time, and I generally do not have to re-build it even to support newer versions of Linux. I get no errors when building "dmraid" on Slackware 14.1, and I get no errors when using "dmraid" to configure the Device Mapper. So far it looks like a memory leak problem with the Device Mapper, the Promise disk controller driver, or something else.

Here are some hardware details.
  • Pentium 4 3.2 GHz, single core with Hyper-Threading
  • 2 GB RAM
  • Asus P4C800E Deluxe Motherboard
  • Promise FastTrack 374 (fake hardware) RAID
  • One RAID 0 array using two SATA disks (Linux boot partition)
  • One RAID 0 array using two IDE disks (no operating system)

The Promise RAID controller uses a normal SATA and IDE disk controller with a BIOS ROM for booting. I am not using the Promise proprietary RAID driver since it has not been updated for kernels later than 2.4. I am using the normal (non-RAID) SATA driver for the Promise disk controller that comes with Linux. The Device Mapper takes care of the striping instead of the proprietary driver. This has worked in the past with no problems.
 
Old 01-19-2014, 09:29 PM   #2
Ser Olmy
Senior Member
 
Registered: Jan 2012
Distribution: Slackware
Posts: 3,348

Sounds like the installer is installing to the RAM disk until you run out of memory.

Linux support for FakeRAID is basically nonexistent. mdadm can access Intel RAID sets, but no other metadata format is, or is likely ever to be, supported.

dmraid can activate RAID 1 sets created by a wide range of controllers, but will choke on RAID 5 sets due to the current device mapper RAID456 module being incompatible. Some Linux distributions use custom kernels with a backported RAID45 module, but it's basically a real mess.

I actually have a number of Slackware systems booting from dmraid-driven fakeRAID RAID 1 sets. This is the procedure I followed to get them working:
  1. Create the RAID set using the controller's firmware setup routine
  2. Boot from a System Rescue CD (which contains dmraid) and create the partitions
  3. Power down the system and unplug one drive
  4. Boot from the Slackware install DVD and install to the existing partitions on /dev/sda
  5. Replace all partition references in /etc/fstab with UUIDs or labels
  6. Download and compile dmraid
  7. Create an initrd with dmraid (the init script must be modified to include the /sbin/dmraid -ay command)
  8. Create two entries in lilo.conf: one with an initrd and one that boots directly to /dev/sdaN (see the sketch below)
  9. Power down, plug the 2nd drive back in and resync the RAID array
Steps 1-4 ensure that the partitions are created within the limits imposed by the fakeRAID controller. A RAID 1 set is just two drives with some RAID metadata at the end, so once the set and the partitions have been created, either drive can be used as if it were a single non-RAID drive.
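
For steps 7 and 8, lilo.conf ends up with two stanzas along these lines (a sketch only, not a real config; labels and devices are examples, and how the kernel is pointed at the device mapper root, whether via mkinitrd -r or a root= parameter, is left out for brevity):

Code:
image = /boot/vmlinuz
  initrd = /boot/initrd.gz   # the initrd whose init runs /sbin/dmraid -ay (step 7)
  label = dmraid
  read-only
image = /boot/vmlinuz
  root = /dev/sda1           # example device: boots directly off one member drive
  label = direct
  read-only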

The result is a system that boots off a BIOS-supported fakeRAID array, but can never be directly upgraded to a more recent kernel as lilo throws a fit when it sees the device mapper root device. The only way to update the lilo boot sector is to temporarily boot to the non-dmraid entry, run lilo, reboot and resync the RAID.

Alternatively, unplugging a drive will also do the trick, as lilo seems perfectly happy to write to a degraded dmraid set.

All this to ensure the system will still boot from a RAID 1 fakeRAID array with a failed drive. And of course, none of this will work with a RAID 0 set as both drives are needed at all times, or a RAID5/6 set as dmraid won't be able to assemble the array due to the incompatible kernel module.

Edit: To figure out what the Slackware installer is actually doing, switch to another console and type mount. If all is well, the RAID device should be mounted to the /mnt directory. I'm willing to bet it isn't, which means you're installing to the RAM disk.
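
Something along these lines, with whatever your set is called (the output is approximate):

Code:
mount | grep ' /mnt'
# a healthy install looks roughly like:
# /dev/mapper/pdc_cbbajbheep2 on /mnt type ext4 (rw)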

Last edited by Ser Olmy; 01-19-2014 at 09:38 PM.
 
Old 01-19-2014, 11:24 PM   #3
Erik_FL
Member
 
Registered: Sep 2005
Location: Boynton Beach, FL
Distribution: Slackware
Posts: 821

Original Poster
Quote:
Originally Posted by Ser Olmy
Sounds like the installer is installing to the RAM disk until you run out of memory.
That's exactly what was happening. I found a workaround, but it doesn't make a lot of sense.

I found that my device was shown as mounted on "/mnt" but it was still apparently writing to the RAM disk. I suspect that setup may have somehow created a loop device on the RAM disk. When I rebooted and mounted my RAID partition, it had not been re-formatted. The "setup" script was obviously formatting and mounting something else.

The workaround was to format the partition manually with "mkfs.ext4" and then tell "setup" not to format the partition. For some reason, formatting the partition from "setup" causes the wrong device to be formatted and mounted. I had forgotten that I did the formatting manually the last time I installed Slackware, so this is the first time that I've attempted to format the partition using "setup".
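
In other words, the sequence that works for me is (a sketch; the device name is my Promise set):

Code:
mkfs.ext4 /dev/mapper/pdc_cbbajbheep2   # format the target by hand first
setup                                   # then select the same device and choose
                                        # NOT to format it when setup asks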

Thank you for your suggestions. I've been using a modified "init" script that just includes a "dmraid -ay" command. I also wrote some UDEV rules to create short device names like "/dev/sdr2" to make things easier. The UDEV rules were tricky because "dmraid" creates separate device nodes for the names under "/dev/mapper" instead of linking them to the real device names. I had to compare the block IDs of the "/dev/dm-nn" versus the "/dev/mapper" names to tell when there was a match.
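
The rules boil down to something like this (a simplified sketch rather than what I actually use; it assumes the device-mapper udev rules export DM_NAME for the dm-* nodes, and if they don't, the device-number comparison I described is needed instead):

Code:
# /etc/udev/rules.d/99-dmraid-alias.rules (hypothetical file name)
# give the second partition of the Promise set a short alias, /dev/sdr2
KERNEL=="dm-*", ENV{DM_NAME}=="pdc_cbbajbheep2", SYMLINK+="sdr2"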

I use GRUB 0.97 for booting and I install GRUB from a native GRUB shell booted from a floppy or CD. That avoids the problems with trying to install the boot loader from Linux. The GRUB native shell uses the BIOS to read and write the boot sectors and partition table. The side benefit is that I don't have to reinstall the boot loader every time that I change the menu or the kernel files.
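
From the native GRUB shell the install is just the usual two commands (the partition numbers here are only examples):

Code:
grub> root (hd0,1)   # the partition that holds /boot/grub
grub> setup (hd0)    # write the boot sector through the BIOS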

I think that my problem is solved. If I have other problems I'll open a new thread.

Last edited by Erik_FL; 01-19-2014 at 11:29 PM.
 
  

