Old 06-22-2014, 02:55 PM   #1
The00Dustin
Member
 
Registered: Jan 2006
Posts: 68

Rep: Reputation: 15
DMRAID Issue + TrueCrypt Issue - May Not Be Ubuntu Specific


I'm not new to Linux, but I was trained in the ways of Red Hat (actually CentOS/Fedora, but close enough), so Ubuntu/Debian is still a bit foreign to me. I'm also a bit out of practice, as I mostly only use Linux to manage Windows disks (usually via Parted Magic, but with ntfsutils and dd rather than the GUI).

To get as far as I have, I had to learn that FakeRAID isn't ideal in Linux, but I am running Windows on the same drives and don't want to virtualize it, so I stuck with FakeRAID in spite of its problems. I would use real hardware RAID, but my RAID controller is SATA II and I am using SATA III SSDs in this configuration. That said, I had a heck of a time installing Ubuntu 14.04 on a LUKS partition on a FakeRAID volume. I finally got that done, and now I'm having trouble using TrueCrypt to open the Windows partitions on another FakeRAID volume, which may be related to the same DMRAID behavior.

First and foremost, let me explain my actual RAID configuration, because this could be something DMRAID simply doesn't fully support. I have four SSDs set up in RAID10 (which is, unfortunately, RAID0+1 on this particular chipset). More specifically, I have them set up as TWO RAID10 arrays, which I will refer to as RAID10SSDP1 and RAID10SSDP2. My original problem was that DMRAID doesn't activate the second array automatically, so installing Ubuntu there seemed unlikely to work even if everything else went perfectly (it didn't: the machine doesn't support UEFI [an 8-core FX processor with no UEFI BIOS seems odd to me], and the ACPI implementation is buggy, so it took quite a while just to get the Ubuntu CD to boot on this machine).

In hindsight, given that I'm not getting RAID1+0 out of this, the ideal answer to my problem is probably to redo everything and end up with two RAID1 arrays (one FakeRAID, one MD RAID or LVM) instead of a RAID0+1 array that doesn't really add any additional fault tolerance. However, I don't really want to do that, since I have both operating systems installed and configured to some degree, and I don't have a lot of free time to mess with all of this (backing up Windows again and restoring it to RAID1, plus re-installing Ubuntu with something other than DMRAID and losing what I've already set up there). Moreover, I wouldn't mind helping to get a bug fixed in DMRAID if one exists.

I already backed up the boot sector and partitions from RAID10SSDP1 using Ubuntu Desktop 14.04 in live CD mode so that I could delete and re-create the RAID10 arrays after secure-erasing the SSDs (to make sure I don't lose any performance), and I re-created RAID10SSDP1 at the size RAID10SSDP2 had been (and vice versa). I also already restored the backed-up RAID10SSDP1 data to RAID10SSDP2. Windows boots fine and doesn't even know anything is different. Ubuntu is installed on RAID10SSDP1, since that solves the DMRAID auto-activation problem. If I hadn't had to disable ACPI (and then made a minor mistake in grub.conf because of that), installing Ubuntu on LUKS would have been the easy part.
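For context, the backup and restore themselves were nothing fancy; from memory it went roughly like the following (the /mnt/backup path and the exact mapper names are placeholders, and I'd double-check partition sizes before trusting any of this verbatim):

# Back up the MBR (boot code + partition table) from the source array
sudo dd if=/dev/mapper/pdc_RAID10SSDP1 of=/mnt/backup/ssdp1-mbr.bin bs=512 count=1
# Back up each partition as a raw image
sudo dd if=/dev/mapper/pdc_RAID10SSDP11 of=/mnt/backup/ssdp1-part1.img bs=1M
sudo dd if=/dev/mapper/pdc_RAID10SSDP12 of=/mnt/backup/ssdp1-part2.img bs=1M
# After deleting/re-creating the arrays, restore the MBR to the target array,
# re-run "dmraid -ay" (or reboot) so the partition mappings reappear, then restore the partitions
sudo dd if=/mnt/backup/ssdp1-mbr.bin of=/dev/mapper/pdc_RAID10SSDP2 bs=512 count=1
sudo dd if=/mnt/backup/ssdp1-part1.img of=/dev/mapper/pdc_RAID10SSDP21 bs=1M
sudo dd if=/mnt/backup/ssdp1-part2.img of=/dev/mapper/pdc_RAID10SSDP22 bs=1M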

All of that having been said, I'm going to delve into the details of the DMRAID issue, but first, I want to define what I am referring to:
1) /dev/mapper/pdc_dd is the beginning of the name of the DMRAID auto-activated array, but since I don't know the whole name, I am going to substitute pdc_dd... with pdc_RAID10SSDP1
2) /dev/mapper/pdc_de is the beginning of the name of the other DMRAID array, which isn't activated during boot due to detected conflicts or something. Again, since I don't know the whole name, I am going to substitute pdc_de... with pdc_RAID10SSDP2

So, when Ubuntu 14.04 boots (live CD or installed copy), DMRAID messages pop up about disabling the partition tables on a few dm-#'s because they are components of another dm-# (which is pdc_RAID10SSDP1). Additional DMRAID messages pop up about not activating at least one more dm-# (and maybe additional dm-#'s) due to a conflict or something. I'm not in Linux right now and want to get this posted while I have time, so I don't have access to the logs at the moment, but I will gladly post more specific information from the logs later if someone can confirm that this behavior isn't expected for my configuration. Once I'm in Ubuntu, dmraid -s shows the active superset (pdc_RAID10SSDP1) and the inactive superset (pdc_RAID10SSDP2). dmraid -ay activates pdc_RAID10SSDP2 with output something like this:
pdc_RAID10SSDP1 already activated
pdc_RAID10SSDP2-0 successfully activated
pdc_RAID10SSDP2-1 successfully activated
pdc_RAID10SSDP2 successfully activated
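For reference, these are roughly the commands I've been using to find the full set names and to activate the second set by hand (from memory, so the flags may be slightly off):

sudo dmraid -s                     # lists the active and inactive supersets with their full names
sudo dmraid -s -c                  # the same information as compact, name-only output (if this build supports -c)
sudo dmsetup info -c               # maps each /dev/mapper name to its dm-# / major:minor
sudo dmraid -ay pdc_RAID10SSDP2    # activate just the one set by name (the real pdc_de... name goes here)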

Prior to activation, /dev/mapper has the following:
pdc_RAID10SSDP1-0 - a RAID0 component
pdc_RAID10SSDP1-1 - a RAID0 component
pdc_RAID10SSDP1 - the RAID0+1 array
pdc_RAID10SSDP11 - /boot
pdc_RAID10SSDP12 - the encrypted LUKS container (root)

After activation, the following are added:
pdc_RAID10SSDP2-0 - a RAID0 component
pdc_RAID10SSDP2-1 - a RAID0 component
pdc_RAID10SSDP2 - the RAID0+1 array
pdc_RAID10SSDP21 - the Windows boot TrueCrypt container
pdc_RAID10SSDP22 - the C:\ TrueCrypt container
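In case it matters, this is roughly how I've been confirming what each of those mapper entries holds (again from memory; obviously the blkid/parted output depends on the actual layout):

ls -l /dev/mapper                               # shows which dm-# each mapper name points to
sudo blkid /dev/mapper/pdc_RAID10SSDP11         # reports the filesystem on /boot
sudo blkid /dev/mapper/pdc_RAID10SSDP12         # reports crypto_LUKS for the root container
sudo parted /dev/mapper/pdc_RAID10SSDP2 print   # the partition table on the Windows array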

I'm pretty sure all of this is accurate, and I'm guessing it all sounds normal enough, other than the messages about not activating some dm-# devices during boot. That said, I'm now going to describe some behavior I saw in the Ubuntu 14.04 live CD before installing. Specifically, I ran dmraid -an to deactivate pdc_RAID10SSDP1 and then ran dmraid -ay pdc_RAID10SSDP2 to activate pdc_RAID10SSDP2. DMRAID behaved as if pdc_RAID10SSDP2 were dependent on pdc_RAID10SSDP1 and re-activated it as well. This is the other bit of behavior I'm not sure is normal (and I am open to the possibility that it is normal, or even that this is a bug in my BIOS and not in DMRAID).
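To be concrete, the sequence I ran on the live CD was more or less this (approximate, since I'm going from memory):

sudo dmraid -an                    # deactivate everything
sudo dmraid -s                     # at this point both supersets show as inactive
sudo dmraid -ay pdc_RAID10SSDP2    # ask for only the second set...
sudo dmsetup ls                    # ...yet pdc_RAID10SSDP1 and its subsets come back as well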

So, that's the DMRAID issue; now for the TrueCrypt issue (in the same thread because the configuration information above is relevant). That issue is simpler to explain. Basically, when I try to mount /dev/mapper/pdc_RAID10SSDP22 using TrueCrypt, I get an error stating something like /sys/block/pdc_RAID10SSDP2/pdc_RAID10SSDP22/start doesn't exist, and when I look in /sys/block, I don't see pdc_anything, only the dm-#'s. I can see in the Disks application (because only the RAID0+1 dm-#'s show partitions there) and in something else (I don't remember what) which dm-# corresponds to pdc_RAID10SSDP2, but the dm-#'s don't show up with partitions, so I can't mount a specific partition using /dev/dm-# from TrueCrypt. Based on this thread: http://www.linuxquestions.org/questi...ubuntu-775564/ I'm pretty sure (but not positive, since the steps to resolution aren't actually posted there) that I should be able to mount using the /dev/mapper path that I tried.
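One thing I haven't actually tried yet, and I mention it only as a possible workaround rather than something I know works with this layout: I believe the cryptsetup shipped with 14.04 is new enough (1.6.x) to open TrueCrypt volumes directly through device-mapper, which would sidestep whatever TrueCrypt is doing with /sys/block. Roughly (the --tcrypt-system flag would apply because C: is TrueCrypt system encryption, and the mapping name win_c is just a placeholder):

# Open the TrueCrypt-encrypted Windows partition read-only via device-mapper
sudo cryptsetup open --type tcrypt --tcrypt-system --readonly /dev/mapper/pdc_RAID10SSDP22 win_c
sudo mount -o ro /dev/mapper/win_c /mnt
# Clean up afterwards
sudo umount /mnt
sudo cryptsetup close win_c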

Sorry for the super long post. Any input on either issue? TIA

Last edited by The00Dustin; 06-22-2014 at 02:56 PM. Reason: added line breaks to separate two comments from each other and a third
 
Old 06-22-2014, 02:59 PM   #2
The00Dustin
Member
 
Registered: Jan 2006
Posts: 68

Original Poster
Rep: Reputation: 15
tl;dr on DMRAID

OK, you probably have to read the post above for this to make sense, but to make sure it is clear, these are the two things I would like to know whether or not are normal:
1) DMRAID doesn't auto-activate the second array on the same set of disks
2) DMRAID can't activate the second array on the same set of disks without also activating the first array
 
Old 07-01-2014, 12:43 AM   #3
notsure
Member
 
Registered: Jun 2012
Location: Detroit
Distribution: Arch x86_64
Posts: 109

Rep: Reputation: 9
I doubt anyone read that long post.
I did, but I've never used DMRAID and I don't care for Ubuntu nor Windows.

However, maybe you can clarify how you have 4 SSDs in TWO RAID10 arrays (I gather you didn't make a typo). Furthermore, you state the motherboard is setting up 2 striped arrays to be mirrored (RAID0+1).
 
Old 07-01-2014, 06:41 AM   #4
The00Dustin
Member
 
Registered: Jan 2006
Posts: 68

Original Poster
Rep: Reputation: 15
Thanks for reading the long post. I added the tl;dr post afterwards because I was concerned that people might not read the initial post, and skimming plus the tl;dr might be enough to tell them whether they can potentially help. You are correct that I did not make a typo. Since you appear to at least understand RAID (even if your experience with DMRAID is just slightly below mine) and believe clarification could be beneficial, here goes:

When you create a RAID array, it may take the full size of the smallest disk (or all the space when the disks are the same size) by default, but there is usually an option to set the size. In this particular configuration, that option exists. I am actually using four 240GB SSDs, and I intended to make two 200GB RAID10 arrays. I went with two arrays because it segregates the boot devices, so I can use the BIOS boot options. I wanted to make them 200GB because this guarantees that a decent-to-excessive (logical) portion of each disk will remain unwritten, which allows for faster performance between garbage collection / wear leveling cycles (because there is no TRIM in RAID). Unfortunately, the second array cannot have its size set in my configuration, so it ended up being 280GB; however, LVM would ultimately provide the same guarantee, in the sense that 80GB (40 on each disk) of (logical) space would never be written.

The discussion about the Ubuntu issue and backing up and restoring all the data was related to the fact that DMRAID doesn't automatically activate the second array. I had to use dd to back up all of the Windows data (boot sectors, partition table, partitions), delete the RAID arrays, secure-erase the disks, and re-create the RAID arrays with the oversized array first, in order to use LVM to leave the untouched space on the Ubuntu array while still allowing the full-disk TrueCrypt functionality on the Windows array to remain completely intact.
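For anyone curious, the secure erase step was the standard ATA secure erase procedure, roughly as follows (from memory; the drives must not report as "frozen", and the password and device names are placeholders):

sudo hdparm -I /dev/sda | grep -i frozen               # must say "not frozen" before erasing
sudo hdparm --user-master u --security-set-pass pass /dev/sda
sudo hdparm --user-master u --security-erase pass /dev/sda
# repeat for /dev/sdb, /dev/sdc and /dev/sdd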

You are also correct that the motherboard is setting up striped arrays to be mirrored (I would prefer mirrored pairs with a stripe across them, but beggars can't be choosers).
 
  

