Old 12-16-2009, 01:43 PM   #1
Openumerix
LQ Newbie
 
Registered: Nov 2009
Location: Canada
Distribution: Own derived from LFS
Posts: 14

Rep: Reputation: 0
IMSM "volumes" in mdadm RAID setup


Hello All,

I believe I have read everything there is on the net about software/firmware RAID, but I could not find a clear explanation of the relationship between the "volumes" set up by the Intel Matrix Storage Manager option ROM (v8.0.9 on a Tyan S7025 motherboard) and the devices that mdadm handles.

What I am trying to achieve is to use the option ROM to configure two SATA drives in a part-RAID0, part-RAID1 layout, keeping the OS and root FS on the former array and my user data on the latter (kernel 2.6.31, mdadm 3.0.3).

After setting up the "volumes" in the option ROM, my /dev/sda and /dev/sdb disks contain what the next two listings show:

Quote:
/dev/sda:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.2.00
Orig Family : 205cceeb
Family : 205cceeb
Generation : 00000015
UUID : 5358c13c:ce10a94d:3bea9997:56eea836
Checksum : 77f65e69 correct
MPB Sectors : 2
Disks : 2
RAID Devices : 2

Disk00 Serial : 9VS2AA7M
State : active
Id : 00000000
Usable Size : 2930272654 (1397.26 GiB 1500.30 GB)

[Volume0]:
UUID : 5059b5db:5e0523eb:68953f67:d2c3fdb6
RAID Level : 0
Members : 2
This Slot : 0
Array Size : 838860800 (400.00 GiB 429.50 GB)
Per Dev Size : 419430664 (200.00 GiB 214.75 GB)
Sector Offset : 0
Num Stripes : 1638400
Chunk Size : 128 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : clean

[Volume1]:
UUID : 78bbe31f:5e63811b:1eed23e5:3f6eb516
RAID Level : 1
Members : 2
This Slot : 0
Array Size : 2510835712 (1197.26 GiB 1285.55 GB)
Per Dev Size : 2510835976 (1197.26 GiB 1285.55 GB)
Sector Offset : 419434760
Num Stripes : 9807952
Chunk Size : 64 KiB
Reserved : 0
Migrate State : idle
Map State : uninitialized
Dirty State : clean

Disk01 Serial : 9VS29VK5
State : active
Id : 00010000
Usable Size : 2930272654 (1397.26 GiB 1500.30 GB)

Quote:
/dev/sdb:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.2.00
Orig Family : 205cceeb
Family : 205cceeb
Generation : 00000015
UUID : 5358c13c:ce10a94d:3bea9997:56eea836
Checksum : 77f65e69 correct
MPB Sectors : 2
Disks : 2
RAID Devices : 2

Disk01 Serial : 9VS29VK5
State : active
Id : 00010000
Usable Size : 2930272654 (1397.26 GiB 1500.30 GB)

[Volume0]:
UUID : 5059b5db:5e0523eb:68953f67:d2c3fdb6
RAID Level : 0
Members : 2
This Slot : 1
Array Size : 838860800 (400.00 GiB 429.50 GB)
Per Dev Size : 419430664 (200.00 GiB 214.75 GB)
Sector Offset : 0
Num Stripes : 1638400
Chunk Size : 128 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : clean

[Volume1]:
UUID : 78bbe31f:5e63811b:1eed23e5:3f6eb516
RAID Level : 1
Members : 2
This Slot : 1
Array Size : 2510835712 (1197.26 GiB 1285.55 GB)
Per Dev Size : 2510835976 (1197.26 GiB 1285.55 GB)
Sector Offset : 419434760
Num Stripes : 9807952
Chunk Size : 64 KiB
Reserved : 0
Migrate State : idle
Map State : uninitialized
Dirty State : clean

Disk00 Serial : 9VS2AA7M
State : active
Id : 00000000
Usable Size : 2930272654 (1397.26 GiB 1500.30 GB)
I then created a temporary mdadm configuration file, mdconf.tmp, as shown in the following listing:

Quote:
# step 1:
# echo "DEVICES /dev/sd[ab]" > mdconf.tmp
DEVICES /dev/sd[ab]
# step 2:
# mdadm -Ebsc mdconf.tmp >> mdconf.tmp
ARRAY metadata=imsm UUID=5358c13c:ce10a94d:3bea9997:56eea836
ARRAY /dev/md/Volume0 container=5358c13c:ce10a94d:3bea9997:56eea836 member=0 UUID=5059b5db:5e0523eb:68953f67:d2c3fdb6
ARRAY /dev/md/Volume1 container=5358c13c:ce10a94d:3bea9997:56eea836 member=1 UUID=78bbe31f:5e63811b:1eed23e5:3f6eb516
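(For readability, the short options in step 2 expand to the long form below; it is the very same command, nothing new:)

Code:
> mdadm --examine --brief --scan --config=mdconf.tmp >> mdconf.tmp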
By running

Code:
> mdadm -Ascv mdconf.tmp
I get

Quote:
mdadm: looking for devices for further assembly
md: md127 stopped.
mdadm: /dev/sdb is identified as a member of /dev/md/imsm0, slot -1.
mdadm: /dev/sda is identified as a member of /dev/md/imsm0, slot -1.
md: bind<sda>
mdadm: added /dev/sda to /dev/md/imsm0 as -1
md: bind<sdb>
mdadm: added /dev/sdb to /dev/md/imsm0 as -1
mdadm: Container /dev/md/imsm0 has been assembled with 2 drives
mdadm: looking for devices for /dev/md/Volume0
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm: /dev/sdb is not a container, and one is required.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm: /dev/sda is not a container, and one is required.
mdadm: looking for devices for /dev/md/Volume1
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm: /dev/sdb is not a container, and one is required.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm: /dev/sda is not a container, and one is required.
In practice, it creates a container holding the two /dev/sd[ab] devices, but neither of the two volumes that I am really after. The output of

Code:
> mdadm -D /dev/md127
is

Quote:
/dev/md127:
Version : imsm
Raid Level : container
Total Devices : 2

Working Devices : 2


UUID : 5358c13c:ce10a94d:3bea9997:56eea836
Member Arrays :

Number Major Minor RaidDevice

0 8 0 - /dev/sda
1 8 16 - /dev/sdb
whereas

Code:
> mdadm -E /dev/md127
says

Quote:
/dev/md127:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.2.00
Orig Family : 205cceeb
Family : 205cceeb
Generation : 00000015
UUID : 5358c13c:ce10a94d:3bea9997:56eea836
Checksum : 77f65e69 correct
MPB Sectors : 2
Disks : 2
RAID Devices : 2

Disk00 Serial : 9VS2AA7M
State : active
Id : 00000000
Usable Size : 2930272654 (1397.26 GiB 1500.30 GB)

[Volume0]:
UUID : 5059b5db:5e0523eb:68953f67:d2c3fdb6
RAID Level : 0
Members : 2
This Slot : 0
Array Size : 838860800 (400.00 GiB 429.50 GB)
Per Dev Size : 419430664 (200.00 GiB 214.75 GB)
Sector Offset : 0
Num Stripes : 1638400
Chunk Size : 128 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : clean

[Volume1]:
UUID : 78bbe31f:5e63811b:1eed23e5:3f6eb516
RAID Level : 1
Members : 2
This Slot : 0
Array Size : 2510835712 (1197.26 GiB 1285.55 GB)
Per Dev Size : 2510835976 (1197.26 GiB 1285.55 GB)
Sector Offset : 419434760
Num Stripes : 9807952
Chunk Size : 64 KiB
Reserved : 0
Migrate State : idle
Map State : uninitialized
Dirty State : clean

Disk01 Serial : 9VS29VK5
State : active
Id : 00010000
Usable Size : 2930272654 (1397.26 GiB 1500.30 GB)
I believe this is what is supposed to happen, but I am not sure. In my understanding, "partitioning" with the option ROM would let me boot very easily from the first RAID array, which I could then further partition using Linux tools (fdisk or cfdisk?). That way I would not need to partition both drives before creating the RAID arrays... Nevertheless, I am puzzled by the "volume" as a device: the mdadm documentation mentions only real disk devices and partitions.

Can mdadm work with the Intel Matrix volumes, and if it can, is that supposed to offer better performance than a "pure" software RAID setup built on individual partitions? How do I assemble such a device? Or do I need to delete the volumes, partition the drives, and build the container and the arrays within it in the way documented pretty much everywhere? If so, does the ICH10R controller really help boost the performance of the arrays?
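For completeness, this is roughly what I run to see what mdadm itself knows about the controller and the assembled container (just a sketch, the exact output obviously depends on the platform):

Code:
> mdadm --detail-platform     # capabilities of the IMSM option ROM as mdadm sees them
> cat /proc/mdstat            # md devices the kernel currently has (only md127 in my case)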

dmraid -ay does find and initialize the two "volumes", but I would like to use mdadm alone, or am I on the totally wrong track?

Your help is highly appreciated. Thanks ahead,
Tibor
 
Old 12-16-2009, 01:48 PM   #2
mostlyharmless
Senior Member
 
Registered: Jan 2008
Distribution: Arch/Manjaro, might try Slackware again
Posts: 1,851
Blog Entries: 14

Rep: Reputation: 284
Quote:
dmraid -ay does find and initialize the two "volumes", but I would like to use mdadm alone, or am I on the totally wrong track?
dmraid is the tool to use for firmware RAID; by insisting on mdadm, you are on the totally wrong track, IMO. Is there something wrong with dmraid?
 
Old 12-16-2009, 02:05 PM   #3
Openumerix
LQ Newbie
 
Registered: Nov 2009
Location: Canada
Distribution: Own derived from LFS
Posts: 14

Original Poster
Rep: Reputation: 0
mdadm v3.0 and later does support IMSM directly, and it has a more "direct" connection with the kernel, at least the way I understand it. Furthermore, I keep bumping into statements that mdadm is going to be the "standard" RAID control/handling tool in Linux, so I believe it would be beneficial to go the mdadm way.

I am puzzled by firmware RAID in general. If you are willing to help me clarify things on the subject, you could take a look at another, quite old and totally unanswered post of mine.

Thanks for your interest and help so far.
 
Old 12-16-2009, 02:25 PM   #4
mostlyharmless
Senior Member
 
Registered: Jan 2008
Distribution: Arch/Manjaro, might try Slackware again
Posts: 1,851
Blog Entries: 14

Rep: Reputation: 284
Sure, where's the old post?

My understanding, for what it's worth (which isn't much), is that once the RAID has been set up by dmraid, the device-mapper in the kernel does all the work (it is software RAID, after all), so there is essentially no difference in functionality between mdadm and dmraid - except for the maintenance tools, I suppose.
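For comparison, the dmraid route is roughly the following (a sketch from memory; Intel sets usually show up under /dev/mapper with an isw_ prefix, but the exact names depend on what dmraid detects):

Code:
> dmraid -s           # list the RAID sets dmraid has discovered
> dmraid -ay          # activate all of them
> ls /dev/mapper/     # the activated sets, and their partitions, appear here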

I haven't a clue about how mdadm handles the situation, but the error messages it's throwing are not reassuring. Still, I suppose you could try partitioning/formatting the RAID devices it shows, something like

fdisk /dev/md/imsm0 or fdisk /dev/md/Volume1

followed by the formatting of your choice, assuming the partitioning is successful.
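Purely as a sketch (the partition device name below is a guess on my part; use whatever node actually shows up after partitioning, and whatever filesystem you prefer):

Code:
> fdisk /dev/md/Volume1              # create, say, a single primary partition
> mkfs.ext3 /dev/md/Volume1p1        # hypothetical partition name; ext3 is just an example
> mount /dev/md/Volume1p1 /mnt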
 
Old 12-16-2009, 02:35 PM   #5
Openumerix
LQ Newbie
 
Registered: Nov 2009
Location: Canada
Distribution: Own derived from LFS
Posts: 14

Original Poster
Rep: Reputation: 0
Just click on Openumerix to search for all my posts. It bears the subject line "mdadm v3.0.3 used as in pure software vs. fake RAID setup".

I believe you have just answered it to some extent.

Regarding the current subject, I do not even obtain the /dev/md/Volume[01] devices (I hoped I would, though perhaps I should not expect to), so there is nothing to partition. The only device I obtain is /dev/md127, which seems to be nothing but a container of the two actual disks, that is, /dev/sda and /dev/sdb. Their superblocks do contain the Volume0 and Volume1 "volumes", of RAID 0 and RAID 1 levels respectively, yet these do not get assembled by "mdadm -Asvc ...".
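For the record, this is how I check that only the container exists (nothing more sophisticated than this):

Code:
> cat /proc/mdstat     # on my box: only md127, with sda and sdb as (spare-looking) members
> ls -l /dev/md/       # only the container link is there, no Volume0 or Volume1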

Thanks again.
 
Old 12-16-2009, 05:01 PM   #6
mostlyharmless
Senior Member
 
Registered: Jan 2008
Distribution: Arch/Manjaro, might try Slackware again
Posts: 1,851
Blog Entries: 14

Rep: Reputation: 284
Found your old post: I remember seeing it; I didn't answer it because I don't know of any good, actual data/benchmarks comparing speeds. I suppose if you ever get it going without dmraid, you can then redo it with dmraid and run some benchmarks to compare, answering the question for good.

As for the current project, I've reached my maximum state of incompetence on the subject at this point; you might try the mdadm and/or dmraid developers and see if they can shed some light on it.

EDIT: actually I found this

http://osdir.com/ml/linux-raid/2009-06/msg00009.html

which seems helpful; it looks like you have to use "-e imsm" with mdadm.
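If one were creating the arrays from scratch with mdadm instead of the option ROM, my understanding is that the "-e imsm" bit goes on the container, something like the sketch below (destructive to the existing volumes, so only as an illustration, and the sizes/names are made up):

Code:
> mdadm -C /dev/md/imsm0 -e imsm -n 2 /dev/sd[ab]               # create an empty IMSM container
> mdadm -C /dev/md/Volume0 -l 0 -n 2 -z 209715200 /dev/md/imsm0 # RAID0 volume, 200 GiB per disk (size given in KiB)
> mdadm -C /dev/md/Volume1 -l 1 -n 2 /dev/md/imsm0              # RAID1 volume from the remaining space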

Last edited by mostlyharmless; 12-16-2009 at 05:08 PM.
 
Old 12-17-2009, 09:22 AM   #7
Openumerix
LQ Newbie
 
Registered: Nov 2009
Location: Canada
Distribution: Own derived from LFS
Posts: 14

Original Poster
Rep: Reputation: 0
Yes, -e imsm sets the superblock (metadata) type. That, however, is already set by the metadata=imsm parameter that mdadm -Ebsc returns, which I captured in the mdconf.tmp file.
Thanks again for your good will and time to help.
 
Old 12-18-2009, 11:05 AM   #8
Openumerix
LQ Newbie
 
Registered: Nov 2009
Location: Canada
Distribution: Own derived from LFS
Posts: 14

Original Poster
Rep: Reputation: 0
Got a step closer to the ultimate goal

OK, I got it. There is a tiny note in the mdadm 3.0.3 announcement which at first seems to indicate a suspiciously simple and easy way to get to the finish line. As it turns out, it works like a charm. Here's what I did.

1) Created the Intel Matrix volumes using the option ROM (v8.9.0 for the ICH10R/DO on a Tyan S7025 motherboard) with 2 SATA hard disks.
2) Booted the system - I have my own "distribution" derived from LFS/CLFS on a USB flash memory key (kernel 2.6.31.6)
3) Issued
Code:
> mdadm -A /dev/md0 /dev/sd[ab]
which assembled the IMSM container of the real /dev/sd[ab] block devices
4) Issued
Code:
> mdadm -I /dev/md0
which built the actual RAID devices inside the container. These are the RAID1 and RAID0 Intel Matrix volumes I was so much after (they were created under the names /dev/md126 and /dev/md127)
5) Partitioned the two RAID arrays, /dev/md126 and /dev/md127, using cfdisk.
6) Formatted the partitions (a sketch of steps 5 and 6 follows below).
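For completeness, steps 5) and 6) amounted to something like this (the partition layout and filesystem here are only an example, not a literal transcript of what I typed):

Code:
> cfdisk /dev/md126            # e.g. a small bootable partition plus the rest
> cfdisk /dev/md127
> mkfs.ext3 /dev/md126p1       # the mdXXXpN partition names are what appeared on my system
> mkfs.ext3 /dev/md127p1       # filesystem type is just an example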

That was it. What puzzles me is that the superblock type is now reported as v0.90, and not "imsm", even though imsm is inherently there in the superblock of the container. I'm not sure what this means, so I will do it again, using
Code:
> mdadm -A -e imsm /dev/md0 /dev/sd[ab]
and
Code:
> mdadm -I -e imsm /dev/md0
at steps 3) and 4).
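In the meantime, this is how I look at what metadata each md device reports (a quick sanity check, nothing more):

Code:
> cat /proc/mdstat
> mdadm -D /dev/md0   | grep -i -e version -e container     # the container
> mdadm -D /dev/md126 | grep -i -e version -e container     # the volumes; somewhere here the v0.90 shows up for me
> mdadm -D /dev/md127 | grep -i -e version -e container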

My newest problem is that I could not get GRUB 1.97.1 to see the bootable partition /dev/md126p1 that I created on /dev/md126. All ideas for making this "fake RAID" setup boot are welcome. Thanks.
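The obvious thing I plan to try is installing GRUB onto the raw member disks rather than onto the md device itself; a sketch of what I have in mind is below. Whether GRUB 1.97.1 can then actually read /boot from an IMSM volume is exactly the part I am unsure about, so treat this as a guess rather than a recipe.

Code:
> grub-install /dev/sda                    # install to both underlying disks so either one can boot
> grub-install /dev/sdb
> grub-mkconfig -o /boot/grub/grub.cfg     # regenerate the configuration; /boot is on /dev/md126p1 here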
 
Old 12-18-2009, 12:32 PM   #9
mostlyharmless
Senior Member
 
Registered: Jan 2008
Distribution: Arch/Manjaro, might try Slackware again
Posts: 1,851
Blog Entries: 14

Rep: Reputation: 284
GRUB 1.97 I know nothing about. GRUB 0.97 sees dmraid partitions (I can't remember whether a patch was needed or not), so it *should* see anything the BIOS sees...
 
  


Tags
kernel, linux, mdadm, raid


