LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - Hardware (https://www.linuxquestions.org/questions/linux-hardware-18/)
-   -   Linux does not detect RAID10 array created with Intel C222 controller (https://www.linuxquestions.org/questions/linux-hardware-18/linux-does-not-detect-raid10-array-created-with-intel-c222-controller-4175610797/)

dolphs 07-28-2017 07:53 AM

Linux does not detect RAID10 array created with Intel C222 controller
 
Hi,

I own an Asus P9D-I mainboard, which comes with the Intel C222 controller.
This controller supports either LSI MegaRAID (also Linux) or Rapid Storage Technology (Windows only).

I'd like to use this board as a small hypervisor, e.g. with Xen or VMware ESX.
Unfortunately the RAID10 array I created is not detected in Linux, as it is software RAID.

So from this point onwards I could either go the blacklist-ahci route, so the array hopefully gets detected once the kernel driver is supplied, OR revert from RAID to AHCI and build software RAID1+0: RAID1 on ata3+ata4 and RAID0 on ata1+ata2:

dmesg | grep -i sata | grep 'link up'
[ 8.220258] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 8.220279] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[ 8.220299] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 8.276252] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
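If I end up going the AHCI route, I assume the arrays could be built with mdadm roughly like this (a sketch only; the /dev/sdX names are hypothetical and would first have to be matched to the ata ports above):

```shell
# Sketch only: verify which /dev/sdX sits on which ata port before touching anything
ls -l /sys/block/sd*/device

# RAID1 on the two 3.0 Gbps disks (ata3+ata4), RAID0 on the 6.0 Gbps pair (ata1+ata2)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda /dev/sdb

# ...or a single nested RAID10 across all four disks in one step:
# mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[a-d]

cat /proc/mdstat                          # watch the resync progress
mdadm --detail --scan >> /etc/mdadm.conf  # persist the array definition
```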


I'd prefer to get proper drivers from Intel (Asus), or maybe even Broadcom, so that ahci can be blacklisted and the RAID10 array created in the LSI (nowadays Broadcom) MegaRAID BIOS becomes available to Linux.
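A minimal sketch of that blacklist route on RHEL/CentOS (assuming the vendor module is called megasr, as in Intel's driver package):

```shell
# Keep ahci from claiming the controller so the vendor RAID module can bind
cat > /etc/modprobe.d/blacklist-ahci.conf <<'EOF'
blacklist ahci
EOF

# Rebuild the initramfs so the blacklist takes effect at boot, then reboot
dracut -f
```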

Please note the RAID10 array will be used as storage for the virtual machines; another disk will be used for booting Xen (or VMware) ...


Does anyone have similar experience and can offer some help to get this going, please?

smallpond 07-28-2017 09:09 AM

What is the output of "dmraid -l"? It should recognize your RST RAID.

dolphs 07-28-2017 10:21 AM

Hi,

thanks for your response.
Most likely I need to enable RST (instead of LSI) in the BIOS, switching a jumper in that case, as currently it shows:

[root@testxen ~]# dmraid -l

asr : Adaptec HostRAID ASR (0,1,10)
ddf1 : SNIA DDF1 (0,1,4,5,linear)
hpt37x : Highpoint HPT37X (S,0,1,10,01)
hpt45x : Highpoint HPT45X (S,0,1,10)
isw : Intel Software RAID (0,1,5,01)
jmicron : JMicron ATARAID (S,0,1)
lsi : LSI Logic MegaRAID (0,1,10)
nvidia : NVidia RAID (S,0,1,10,5)
pdc : Promise FastTrack (S,0,1,10)
sil : Silicon Image(tm) Medley(tm) (0,1,10)
via : VIA Software RAID (S,0,1,10)
dos : DOS partitions on SW RAIDs


Meanwhile I found a URL at Intel that seems to be newer than the one on ASUS' site.
Intel:
https://downloadcenter.intel.com/dow...ver-for-Linux-

Asus:
https://www.asus.com/Commercial-Serv...Desk_Download/

jefro 07-28-2017 02:51 PM

Let me understand this.

You have a UEFI BIOS, you set it up as single UEFI-supported drives in the BIOS, and the BIOS sees all the drives? Then you installed some OS using md RAID, correct? Now you say you can't get the OS to see the drives? Pretty sure you want the BIOS to just use the drives as dedicated drives and have nothing to do with any sort of soft RAID just yet.

"The recommended software RAID implementation in Linux* is the open source MD RAID package. Intel has enhanced MD RAID to support RST metadata and OROM and it is validated and supported by Intel for server platforms."

https://www.intel.com/content/dam/ww...inux-paper.pdf

dolphs 07-28-2017 06:49 PM

OK, I'd just like what I set up in my BIOS (RAID10) to be usable in Linux (and later Xen or VMware).

Currently I see the disks are there, but it is not just a matter of activating them:

[root@tstxen ~]# dmraid -r
/dev/sdd: ddf1, ".ddf1_disks", GROUP, ok, 974608384 sectors, data@ 0
/dev/sdc: ddf1, ".ddf1_disks", GROUP, ok, 974608384 sectors, data@ 0
/dev/sdb: ddf1, ".ddf1_disks", GROUP, ok, 974608384 sectors, data@ 0
/dev/sda: ddf1, ".ddf1_disks", GROUP, ok, 974608384 sectors, data@ 0


[root@tstxen ~]# dmraid -s -s ddf1_disks
ERROR: ddf1: wrong # of devices in RAID set "ddf1_4c53492020202020808627c3000000004711471100001450" [1/2] on /dev/sdd
ERROR: ddf1: wrong # of devices in RAID set "ddf1_4c53492020202020808627c3000000004711471100001450" [1/2] on /dev/sdc
ERROR: ddf1: wrong # of devices in RAID set "ddf1_4c53492020202020808627c3000000004711471100001450" [1/2] on /dev/sdb
ERROR: ddf1: wrong # of devices in RAID set "ddf1_4c53492020202020808627c3000000004711471100001450" [1/2] on /dev/sda
ERROR: either the required RAID set not found or more options required
no raid sets and with names: "ddf1_disks"


[root@tstxen ~]# dmraid -ay -f lsi
no raid disks with format: "lsi"

This, lsi, is the format I would expect instead of ddf1, which is why I am a bit confused about how to get this RAID10 array going.
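One avenue that might be worth a try: mdadm, unlike dmraid, has native support for SNIA DDF containers, so it may be able to assemble what the LSI option ROM wrote (a sketch, untested on this board):

```shell
mdadm --examine /dev/sda     # should report DDF metadata if mdadm can read it
mdadm --assemble --scan      # try to auto-assemble the DDF container and its RAID10 set
cat /proc/mdstat             # check whether the array came up
```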

Also, the ASUS manual confuses me, as it explicitly states the Intel Rapid Storage Technology enterprise SATA Option ROM is for Windows only.
Do I understand correctly that I can also give this a shot in Linux (and ultimately Xen (kernel 4.4))?

Thanks

jefro 07-28-2017 07:02 PM

Some others may correct me if I don't have this right. (and I may not)

I understand that Intel Rapid Storage is not currently a good choice for making a firmware RAID in the BIOS. I feel that you want to use the drives simply as a pool of drives, so don't set any RAID feature in the BIOS or on the motherboard.

You'd then install the OS so that you use software RAID (or ZFS RAID, LVM, or such), so that the drives become a controlled software RAID.
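For that pool-of-drives approach, a hedged sketch with md RAID plus LVM on top (all device and volume names hypothetical):

```shell
# Software RAID10 across the four disks, then carve out VM volumes with LVM
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[a-d]
pvcreate /dev/md0
vgcreate vmstore /dev/md0
lvcreate -L 100G -n guest01 vmstore   # one logical volume per VM disk
```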

If you had a server-level hardware RAID, you'd normally configure the BIOS to use the RAID array and set some boot order for it. You'd catch the RAID BIOS after POST and create a RAID or two or more. Then you could boot to some media and select the RAID array as a unit that is basically hidden to the OS; I mean the RAID will appear as one unit.

However, I don't have your board in front of me to be sure. Anything less than a full hardware RAID controller will give you goofy results. I don't think your chip is full hardware RAID, based on the Intel link I posted.

dolphs 07-28-2017 07:57 PM

The P9D-I manual reads:

Storage: SATA Controller

Intel C222
- 2x SATA 3GB/s
- 2x SATA 6GB/s
- Intel Rapid Storage Technology Enterprise (RSTe) supports SW RAID0,1,10 & 5 ( Windows )
- LSI MegaRAID driver supports software RAID0, 1 & 10 ( Windows and Linux )

Indeed, whenever I boot, the POST comes by and LSI MegaRAID pops up showing a RAID10 array online.
Pressing <Ctrl>+M brings me to the LSI Software RAID configuration utility.

In this case my Virtual Drive 0 has been set up with:
- RAID10
- DWC ON ( disk write cache )
- RA ON ( read ahead )
- stripe 64K ( cannot be changed )

The final step is to initialise it.

Once the RAID array is created from this config utility I can even set the regular BIOS to boot from it if I want; there it is called SATA EMBEDDED RAID CONTROLLER.

But in my scenario I'd like to keep this RAID10 array separate for my VMs and boot from a "regular" disk.


Anyway, you are right that this is no hardware RAID, which is also why storcli most likely does not work. And if I'd create the arrays myself (AHCI), I'd need to make sure the RAID1 part ends up on ata3+ata4 while the RAID0 part ends up on ata1+ata2 (refer to my first post).
But again, I am afraid my hypervisor (Xen or VMware) would not detect this ... but first things first ...

AwesomeMachine 07-28-2017 10:28 PM

I'd use software RAID before I used motherboard RAID, unless you need to boot non-Linux OSes off the same array.

dolphs 07-29-2017 03:52 AM

Hi, so I took things from another angle and installed CentOS 7 on a separate disk.
Next I upgraded to the latest 7.3 and used Intel's file: MR_SWR_Driver_1.50-17.01.2016.1107.zip
This installs a new module (megasr) that more or less replaces (blacklists) ahci.


The module loaded for the RAID controller shows now:
00:1f.2 RAID bus controller: Intel Corporation 8 Series/C220 Series Chipset Family SATA Controller 1 [RAID mode] (rev 05)
Subsystem: ASUSTeK Computer Inc. Device 8552
Kernel driver in use: megasr
Kernel modules: ahci, megasr


The output is also looking satisfactory as far as /dev/sdd is concerned:


[root@localhost test]# fdisk -l

Disk /dev/sda: 320.1 GB, 320072933376 bytes, 625142448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 33553920 bytes
Disk label type: dos
Disk identifier: 0x0008cd51

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 562642943 281320448 83 Linux
/dev/sda2 562642944 625141759 31249408 82 Linux swap / Solaris

Disk /dev/sdd: 998.0 GB, 997998985216 bytes, 1949216768 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


From here the drive can be formatted I'd say ...
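A hedged example of that last step (XFS and the mount point are arbitrary choices):

```shell
# Partition the RAID10 virtual drive, format it, and mount it for VM storage
parted -s /dev/sdd mklabel gpt mkpart primary 1MiB 100%
mkfs.xfs /dev/sdd1
mkdir -p /vmstore
mount /dev/sdd1 /vmstore
echo '/dev/sdd1 /vmstore xfs defaults 0 0' >> /etc/fstab
```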

dolphs 07-31-2017 12:18 AM

So now this part works in CentOS 7 (RHEL 7.3), therefore I have set this particular thread to closed. Thanks all.
