
Woodsman 09-18-2012 05:09 PM

Using raw partitions with VirtualBox
I'm preparing a migration check list to move from Slackware 13.1 32-bit to Slackware 14.0 64-bit. Lots to do. :)

I appreciate advice from others who have experience with using raw devices in VirtualBox.

Currently I have build environments on separate partitions on a spare hot-swappable SATA drive. Each build environment uses two partitions, / and /var (yes, I could merge those into one partition). As I have only the one machine for both daily usage and software compiling, I need to reboot at the end of the day to the desired build environment and run my builds at night.

I'm planning to update my primary system with a larger drive. I want to move the build environments to the new drive to eliminate needing the spare drive. Then I want to have access to the build environments through a virtual machine to significantly reduce reboots and night-time only builds. I do not want to use virtual hard disks because virtual disks are slower than real disks.

After reading the VirtualBox user manual and surfing the web for details, I think the challenge resolves to these steps:

1. Transfer the build environment partitions from the spare drive to the new drive (update fstab, etc.)

2. In each build environment install lilo to the partition boot record (for example, lilo -b /dev/sdb6 -C /etc/lilo.conf).

3. Chainload the master boot loader to each build environment boot loader.

4. Verify each build environment system boots on the new drive.

5. In my primary desktop system, create a raw disk image using the specific partitions of the build environment. For example:

vboxmanage internalcommands createrawvmdk -filename /home/public/vm-images/Slackware64-14.0.vmdk -rawdisk /dev/sdb -partitions 5,6 -relative -register -mbr /dev/sdb6

6. Create the virtual machines and use the respective build environment vmdk disk.

7. In each virtual machine create a shared folder (all of my sources and build directories are in /home/public/builds/).

8. To support both running in the virtual machine and rebooting directly into the build partitions, I probably need a small shell script that mounts the build area either directly from the partition or through the shared folder, depending on how the system was booted.

9. Assign 3 GB to each VM (I have 8 GB installed).

10. Edit the build environment fstab to set tmpfs usage to 2.5 GB, which is enough for all of the packages I build. (The build environments themselves need very little RAM while running, so most of it can go to tmpfs. Building in tmpfs is much faster than building on a disk partition.)
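The boot-loader and raw-disk steps above (2, 3, and 5) might be sketched roughly like this. The partition numbers and paths are taken from the examples above; the chainload stanza is an assumption about how a lilo.conf entry for one build environment could look:

```shell
# Step 2: inside each build environment, install lilo to the partition
# boot record (not the MBR), so a master boot loader can chainload it.
lilo -b /dev/sdb6 -C /etc/lilo.conf

# Step 3: in the primary system's /etc/lilo.conf, a chainload stanza
# for one build environment could look like:
#   other = /dev/sdb6
#   label = build64
# Then rerun lilo to update the master boot loader.

# Step 5: on the host, create a raw VMDK exposing only partitions 5
# and 6 of /dev/sdb, using the partition's own boot record as the MBR.
vboxmanage internalcommands createrawvmdk \
    -filename /home/public/vm-images/Slackware64-14.0.vmdk \
    -rawdisk /dev/sdb -partitions 5,6 -relative \
    -register -mbr /dev/sdb6
```

Note that the host user running VirtualBox needs read/write access to the raw device nodes for this to work.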

I understand there is danger in sharing partitions between the host and guest. Does that include shared folders, or only partitions mounted directly in both the host and the guest? Would using NFS from within the virtual machine be better?
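For comparison, the two approaches from inside the guest might look like this. The share name "builds" and the host address are assumptions (192.168.56.1 is the usual VirtualBox host-only address), not anything specified above:

```shell
# VirtualBox shared folder (requires Guest Additions in the guest);
# "builds" is the name the share was given on the host.
mount -t vboxsf builds /home/public/builds

# NFS alternative, assuming the host exports /home/public/builds
# and is reachable from the guest at 192.168.56.1.
mount -t nfs 192.168.56.1:/home/public/builds /home/public/builds
```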

Comments? Ideas?

Thanks! :)

jefro 09-18-2012 07:48 PM

Not sure how true that is, by the way: "I do not want to use virtual hard disks because virtual disks are slower than real disks."
I would like to see some documentation for this.
Personally I'd just use virtual disks. If you pre-allocate the space (a fixed-size image), you don't suffer the growth penalty of a dynamically allocated image, if that is what you mean by slow access.
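A pre-allocated (fixed-size) image, as opposed to the default dynamically allocated one, can be created like this; the filename and size are placeholders:

```shell
# Pre-allocate a 20 GB fixed-size VDI so the image never has to grow
# during writes (dynamic images pay an allocation cost as they expand).
VBoxManage createhd --filename /home/public/vm-images/build.vdi \
    --size 20480 --variant Fixed
```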

Many people use raw partitions and even whole drives in VMs. Plenty of web how-tos exist, and all the VM vendors pretty much say it is not a good way to go, but if you want to, fine. The general safeguard is to make sure the partitions are not mounted on the host.

You can also go with NFS, but this isn't always as easy on build systems. For example, I would create a blank virtual hard drive, boot to a live CD, and move the data over to the blank drive so that the whole build is relative to sda1. This may just be my way of doing it, however.

Shared folders are a different deal. They are managed by the VM, and there is only a slight security risk, usually not a data-corruption issue. On most VMs they work like a networked drive.

ReaperX7 09-18-2012 09:04 PM

If you use an LsiLogic or BusLogic SCSI controller for your hard drive, it will speed up read/write a bit over IDE and/or SATA. Not sure if SAS would work, but SCSI works with just about every operating system these days.
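Switching a VM to an LsiLogic SCSI controller and attaching the raw VMDK to it might look like this; the VM name is a placeholder:

```shell
# Add an LsiLogic SCSI controller to the VM and attach the raw-disk
# VMDK to it instead of the default IDE/SATA controller.
VBoxManage storagectl "Slackware64-14.0" --name "SCSI" \
    --add scsi --controller LsiLogic
VBoxManage storageattach "Slackware64-14.0" --storagectl "SCSI" \
    --port 0 --device 0 --type hdd \
    --medium /home/public/vm-images/Slackware64-14.0.vmdk
```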
