Old 07-28-2017, 07:53 AM   #1
dolphs
Member
 
Registered: Nov 2003
Posts: 52

Rep: Reputation: 15
Linux does not detect RAID10 array created with Intel C222 controller


Hi,

I own an Asus P9D-I mainboard, which comes with the Intel C222 controller.
This controller supports either LSI MegaRAID (also Linux) or Rapid Storage Technology (Windows only).

I'd like to use this board as a small hypervisor, e.g. with Xen or VMware ESXi.
Unfortunately the RAID10 array I created is not detected in Linux, as it is software RAID.

So from this point onwards I could either go the blacklist-ahci route, so the array hopefully gets detected once proper kernel drivers are supplied, OR revert the controller from RAID to AHCI and build software RAID1+0 myself: RAID1 on ata3+ata4 and RAID0 on ata1+ata2 (see the mdadm sketch after the dmesg output below):

dmesg | grep -i sata | grep 'link up'
[ 8.220258] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 8.220279] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[ 8.220299] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 8.276252] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
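
For the AHCI route, a minimal mdadm sketch of what I have in mind would be something like this (the device names are only assumptions for the four links above; I have not run this on the board yet):

# one way to build RAID10 across the four disks with mdadm (assuming they appear as sda..sdd)
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
# watch the initial sync and verify the layout
cat /proc/mdstat
mdadm --detail /dev/md0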


I'd prefer to get proper drivers from Intel (Asus), or maybe even Broadcom, so that ahci gets blacklisted and the RAID10 array created in the LSI (Broadcom nowadays) MegaRAID BIOS instead becomes available to Linux.

Please note the RAID10 array will be used as storage for the virtual machines; another disk will be used for booting Xen (or VMware) ...


Does anyone have similar experience and can offer some help to get this going, please?

Last edited by dolphs; 07-28-2017 at 07:55 AM.
 
Old 07-28-2017, 09:09 AM   #2
smallpond
Senior Member
 
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: Fedora
Posts: 4,151

Rep: Reputation: 1264
What is the output of "dmraid -l"? It should recognize your RST RAID.
 
Old 07-28-2017, 10:21 AM   #3
dolphs
Member
 
Registered: Nov 2003
Posts: 52

Original Poster
Rep: Reputation: 15
Hi,

thanks for your response.
Most likely I need to enable RST (instead of LSI) in that case, by switching the jumper or the BIOS setting, as currently it shows:

[root@testxen ~]# dmraid -l

asr : Adaptec HostRAID ASR (0,1,10)
ddf1 : SNIA DDF1 (0,1,4,5,linear)
hpt37x : Highpoint HPT37X (S,0,1,10,01)
hpt45x : Highpoint HPT45X (S,0,1,10)
isw : Intel Software RAID (0,1,5,01)
jmicron : JMicron ATARAID (S,0,1)
lsi : LSI Logic MegaRAID (0,1,10)
nvidia : NVidia RAID (S,0,1,10,5)
pdc : Promise FastTrack (S,0,1,10)
sil : Silicon Image(tm) Medley(tm) (0,1,10)
via : VIA Software RAID (S,0,1,10)
dos : DOS partitions on SW RAIDs


Meanwhile I found a URL at Intel that seems to be newer than what is on ASUS' site:
Intel:
https://downloadcenter.intel.com/dow...ver-for-Linux-

Asus:
https://www.asus.com/Commercial-Serv...Desk_Download/

Last edited by dolphs; 07-28-2017 at 10:42 AM.
 
Old 07-28-2017, 02:51 PM   #4
jefro
Moderator
 
Registered: Mar 2008
Posts: 22,001

Rep: Reputation: 3629
Let me understand this.

You have a UEFI BIOS, set it up as single UEFI-supported drives in the BIOS, and the BIOS sees all the drives? Then you installed some OS using md RAID, correct? Now you say you can't get the OS to see the drives? Pretty sure you want the BIOS to just use the drives as dedicated drives and have nothing to do with any sort of firmware soft RAID just yet.

"The recommended software RAID implementation in Linux* is the open
source MD RAID package. Intel has
enhanced MD RAID to support RST
metadata and OROM and it is validate
d and supported by Intel for server
platforms."

https://www.intel.com/content/dam/ww...inux-paper.pdf
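
As a rough sketch of that MD RAID / RST route (not taken from the paper, and the device names are assumptions), creating an RST/IMSM volume with mdadm would typically go via a container:

# create an IMSM (RST) container over the four disks
mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
# create the RAID10 volume inside that container
mdadm --create /dev/md/vol0 --level=10 --raid-devices=4 /dev/md/imsm0
# verify the result
cat /proc/mdstat
mdadm --detail /dev/md/vol0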

Last edited by jefro; 07-28-2017 at 06:57 PM.
 
Old 07-28-2017, 06:49 PM   #5
dolphs
Member
 
Registered: Nov 2003
Posts: 52

Original Poster
Rep: Reputation: 15
OK, I'd just like to achieve that what I set up in my BIOS (RAID10) can be used in Linux (and later Xen or VMware):

Currently I see the disks are there, but it is not simply a matter of activating them:

[root@tstxen ~]# dmraid -r
/dev/sdd: ddf1, ".ddf1_disks", GROUP, ok, 974608384 sectors, data@ 0
/dev/sdc: ddf1, ".ddf1_disks", GROUP, ok, 974608384 sectors, data@ 0
/dev/sdb: ddf1, ".ddf1_disks", GROUP, ok, 974608384 sectors, data@ 0
/dev/sda: ddf1, ".ddf1_disks", GROUP, ok, 974608384 sectors, data@ 0


[root@tstxen ~]# dmraid -s -s ddf1_disks
ERROR: ddf1: wrong # of devices in RAID set "ddf1_4c53492020202020808627c3000000004711471100001450" [1/2] on /dev/sdd
ERROR: ddf1: wrong # of devices in RAID set "ddf1_4c53492020202020808627c3000000004711471100001450" [1/2] on /dev/sdc
ERROR: ddf1: wrong # of devices in RAID set "ddf1_4c53492020202020808627c3000000004711471100001450" [1/2] on /dev/sdb
ERROR: ddf1: wrong # of devices in RAID set "ddf1_4c53492020202020808627c3000000004711471100001450" [1/2] on /dev/sda
ERROR: either the required RAID set not found or more options required
no raid sets and with names: "ddf1_disks"


[root@tstxen ~]# dmraid -ay -f lsi
no raid disks with format: "lsi"

This format, lsi, is the one I would expect instead of ddf1.
That is why I am a bit confused about how to get this RAID10 array going.
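
For completeness, a couple of things that might be worth trying from here (just a sketch, device names as above; whether the LSI metadata really is DDF underneath is my assumption):

# try activating whatever dmraid recognises, regardless of the format name
dmraid -ay
ls -l /dev/mapper/
# mdadm can also read DDF containers, so examining/assembling may be worth a shot
mdadm --examine /dev/sda
mdadm --assemble --scan
cat /proc/mdstat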

Also, the ASUS manual confuses me, as it explicitly states that the Intel Rapid Storage Technology enterprise SATA Option ROM is for Windows only.
Do I understand correctly that I can also give this a shot in Linux (and ultimately Xen (kernel 4.4))?

Thanks

Last edited by dolphs; 07-28-2017 at 07:00 PM.
 
Old 07-28-2017, 07:02 PM   #6
jefro
Moderator
 
Registered: Mar 2008
Posts: 22,001

Rep: Reputation: 3629
Some others may correct me if I don't have this right. (and I may not)

I understand that Intel Rapid Storage is not currently a good choice for making a firmware RAID in the BIOS. I feel that you want to use the drives simply as a pool of drives, so don't set any RAID feature in the BIOS or on the motherboard.

You'd then install the OS using software RAID (or ZFS RAID, LVM or such) so that the drives become a controlled software RAID.

If you had a server-level hardware RAID, then you'd normally configure the BIOS to use the RAID array and set some boot order for it. Then you'd catch the RAID BIOS after POST and create one or more arrays. Then you could boot to some media and select the RAID array as a unit whose member disks are basically hidden from the OS; I mean the RAID will appear as a single unit.

However, I don't have your board in front of me to be sure. Anything less than a full hardware RAID controller will give you goofy results. I don't think your chip is full hardware RAID, based on the Intel link I posted.

Last edited by jefro; 07-28-2017 at 07:04 PM.
 
Old 07-28-2017, 07:57 PM   #7
dolphs
Member
 
Registered: Nov 2003
Posts: 52

Original Poster
Rep: Reputation: 15
The P9D-I manual reads:

Storage: SATA Controller

Intel C222
- 2x SATA 3Gb/s
- 2x SATA 6Gb/s
- Intel Rapid Storage Technology enterprise (RSTe) supports SW RAID 0, 1, 10 & 5 (Windows)
- LSI MegaRAID driver supports software RAID 0, 1 & 10 (Windows and Linux)

Indeed, whenever I boot, the POST goes by and LSI MegaRAID pops up showing a RAID10 array online.
Pressing <Ctrl>+M brings me to the LSI Software RAID config utility.

In this case my Virtual Drive 0 has been set up with:
- RAID10
- DWC ON ( disk write cache )
- RA ON ( read ahead )
- stripe 64K ( cannot be changed )

The final step is to initialise it.

Once the RAID array has been created from this config utility, I can even set the regular BIOS to boot from it if I want; it shows up as SATA EMBEDDED RAID CONTROLLER.

But in my scenario I'd like to keep this RAID10 array separate for my VMs and boot from a "regular" disk.


Anyway, you are right, this is no hardware RAID; that is also why storcli most likely does not work. And if I create the arrays myself (AHCI), I need to make sure the RAID1 part ends up on ata3+ata4 while the RAID0 part ends up on ata1+ata2 (refer to the first post).
But again, my hypervisor (Xen or VMware) would not detect this, I am afraid... but first things first...

Last edited by dolphs; 07-28-2017 at 08:02 PM.
 
Old 07-28-2017, 10:28 PM   #8
AwesomeMachine
LQ Guru
 
Registered: Jan 2005
Location: USA and Italy
Distribution: Debian testing/sid; OpenSuSE; Fedora; Mint
Posts: 5,524

Rep: Reputation: 1015
I'd use software RAID before I used motherboard RAID, unless you need to boot non-Linux OSes off the same array.
 
Old 07-29-2017, 03:52 AM   #9
dolphs
Member
 
Registered: Nov 2003
Posts: 52

Original Poster
Rep: Reputation: 15
Hi, so I approached things from another angle and installed CentOS 7 on a separate disk.
Next I upgraded to the latest 7.3 and used Intel's file MR_SWR_Driver_1.50-17.01.2016.1107.zip.
This installs a new module that more or less replaces (blacklists) ahci.
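
For reference, the blacklisting that such a driver package performs typically boils down to something like this (a sketch of the usual mechanism, not the literal contents of Intel's package):

# stop ahci from claiming the controller so megasr can bind to it
echo "blacklist ahci" > /etc/modprobe.d/megasr-blacklist.conf
# make sure megasr ends up in the initramfs and rebuild it
echo 'add_drivers+=" megasr "' > /etc/dracut.conf.d/megasr.conf
dracut -f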


The module loaded for the RAID controller now shows:
00:1f.2 RAID bus controller: Intel Corporation 8 Series/C220 Series Chipset Family SATA Controller 1 [RAID mode] (rev 05)
Subsystem: ASUSTeK Computer Inc. Device 8552
Kernel driver in use: megasr
Kernel modules: ahci, megasr


Also the output is looking satisfactory as far as /dev/sdd is concerned:


[root@localhost test]# fdisk -l

Disk /dev/sda: 320.1 GB, 320072933376 bytes, 625142448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 33553920 bytes
Disk label type: dos
Disk identifier: 0x0008cd51

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 562642943 281320448 83 Linux
/dev/sda2 562642944 625141759 31249408 82 Linux swap / Solaris

Disk /dev/sdd: 998.0 GB, 997998985216 bytes, 1949216768 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


From here the drive can be formatted, I'd say ...
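
A minimal sketch of what that next step could look like (filesystem, label and mount point are only placeholders, not what I actually ran):

# create a single GPT partition spanning the array
parted -s /dev/sdd mklabel gpt mkpart primary 1MiB 100%
# put a filesystem on it for VM storage and mount it
mkfs.xfs -L vmstore /dev/sdd1
mkdir -p /var/lib/vmstore
mount /dev/sdd1 /var/lib/vmstore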
 
Old 07-31-2017, 12:18 AM   #10
dolphs
Member
 
Registered: Nov 2003
Posts: 52

Original Poster
Rep: Reputation: 15
So now this part works in CentOS (RHEL 7.3), therefore I set this particular thread to closed. Thanks all.

Last edited by dolphs; 07-31-2017 at 11:15 PM. Reason: closed
 
  

