Linux - Hardware: This forum is for Hardware issues.
Having trouble installing a piece of hardware? Want to know if that peripheral is compatible with Linux?
Hi. I decided to add Linux to my 20-year Windows skill set, so I'm still a newb.
My goal: install CentOS 7 on Intel hardware and learn the ins and outs of Linux so I can qualify for RHEL certification.
My problem: CentOS 7 does not recognize my RAID drive. The GUI installer does not see any disks at all.
My hardware: Intel S5520HC motherboard with an ESRT II RAID controller and 2x 250 GB SATA disks configured as RAID 1.
What I tried: I changed the configuration a couple of times, even running the drives outside RAID as plain SATA or AHCI. I formatted the drives and broke and re-created the RAID. I have done hours of research, and the only article I could find points to boot record data at the end of the disk that needs to be wiped; no idea what that was about. I then installed Windows 2016 and then CentOS 5.8, and both picked up the RAID drive and installed fine.
My questions:
1. Is there a way to inject a driver for the controller during the install? I think Intel might actually have one for SUSE and RHEL, but I cannot see anywhere in the GUI where it can be loaded. Keep in mind I am loading the ISO via iLO.
2. Am I just missing something that can be configured in the GUI during install so that it sees the RAID drive?
Some more info:
[root@localhost ~]# /sbin/lspci
00:00.0 Host bridge: Intel Corporation 5520 I/O Hub to ESI Port (rev 22)
00:01.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 1 (rev 22)
00:03.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 3 (rev 22)
00:05.0 PCI bridge: Intel Corporation 5520/X58 I/O Hub PCI Express Root Port 5 (rev 22)
00:07.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 7 (rev 22)
00:09.0 PCI bridge: Intel Corporation 7500/5520/5500/X58 I/O Hub PCI Express Root Port 9 (rev 22)
00:10.0 PIC: Intel Corporation 7500/5520/5500/X58 Physical and Link Layer Registers Port 0 (rev 22)
00:10.1 PIC: Intel Corporation 7500/5520/5500/X58 Routing and Protocol Layer Registers Port 0 (rev 22)
00:11.0 PIC: Intel Corporation 7500/5520/5500 Physical and Link Layer Registers Port 1 (rev 22)
00:11.1 PIC: Intel Corporation 7500/5520/5500 Routing & Protocol Layer Register Port 1 (rev 22)
00:13.0 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub I/OxAPIC Interrupt Controller (rev 22)
00:14.0 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub System Management Registers (rev 22)
00:14.1 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub GPIO and Scratch Pad Registers (rev 22)
00:14.2 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub Control Status and RAS Registers (rev 22)
00:14.3 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub Throttle Registers (rev 22)
00:15.0 PIC: Intel Corporation 7500/5520/5500/X58 Trusted Execution Technology Registers (rev 22)
00:16.0 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22)
00:16.1 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22)
00:16.2 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22)
00:16.3 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22)
00:16.4 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22)
00:16.5 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22)
00:16.6 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22)
00:16.7 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22)
00:1a.0 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #4
00:1a.1 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #5
00:1a.2 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #6
00:1a.7 USB controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #2
00:1c.0 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 1
00:1c.4 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 5
00:1d.0 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #1
00:1d.1 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #2
00:1d.2 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #3
00:1d.7 USB controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #1
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 90)
00:1f.0 ISA bridge: Intel Corporation 82801JIR (ICH10R) LPC Interface Controller
00:1f.2 RAID bus controller: Intel Corporation 82801JIR (ICH10R) SATA RAID Controller
00:1f.3 SMBus: Intel Corporation 82801JI (ICH10 Family) SMBus Controller
01:00.0 Ethernet controller: Intel Corporation 82575EB Gigabit Network Connection (rev 02)
01:00.1 Ethernet controller: Intel Corporation 82575EB Gigabit Network Connection (rev 02)
07:00.0 VGA compatible controller: Matrox Electronics Systems Ltd. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02)
I will try to share what I think about your situation.
Normally your ESRT II RAID controller will hide the two 250 GB hard disks from CentOS (or any operating system) and instead expose a single 250 GB disk to it.
So you should not concern yourself with the RAID 1 (or any other) configuration.
Besides, I fully encourage your approach of working toward RHEL 7 certification by learning through practice.
Go to the official Red Hat links and note that RAID (with the mdadm tool) is neither an RHCSA nor an RHCE objective.
>>> Normally your ESRT II RAID controller will hide the two 250 GB hard disks from CentOS (or any operating system) and instead expose a single 250 GB disk to it. So you should not concern yourself with the RAID 1 (or any other) configuration.
This is only true for true hardware RAID controllers, which are typically separate cards costing hundreds of dollars. Most cheap on-board controllers are "host RAID", or what is called fakeRAID.
There is a fairly good explanation on this Arch Linux page.
You need to use dmraid to activate any existing RAID sets. You would use a live distribution that has an installation ability.
# modprobe dm_mod      # load the device-mapper core module
# dmraid -ay           # activate all detected fakeRAID sets
# ls -la /dev/mapper/  # the activated array should show up here
>>>> 1. Is there a way of injecting a driver during the install for the controller? I think Intel might actually have one for SUSE and RHEL but I cannot see anywhere in the GUI where it can be loaded. Keep in mind I am loading the ISO via iLO.
I would install the driver via a live distribution with modprobe. You will likely have to compile it from the tgz file.
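As a rough sketch of what such a build usually involves (the archive name esrt2-driver.tgz and module name megasr.ko below are placeholder assumptions, not Intel's real file names; the README inside the actual archive is authoritative):

```shell
#!/bin/sh
# Hedged sketch of an out-of-tree driver build. The archive name
# "esrt2-driver.tgz" and module name "megasr.ko" are assumptions --
# check the README inside the real Intel archive.
archive=${1:-esrt2-driver.tgz}
if [ ! -f "$archive" ]; then
    # Nothing to build yet; download the driver source from Intel first.
    echo "driver archive $archive not found"
else
    tar xzf "$archive"
    cd "${archive%.tgz}" || exit 1
    # Build against the running kernel (requires the kernel-devel package).
    make -C "/lib/modules/$(uname -r)/build" M="$PWD" modules
    # Load the device-mapper core, then the freshly built module.
    modprobe dm_mod
    insmod ./megasr.ko
fi
```

Vendor archives often ship their own Makefile, so a bare `make` may be enough; the out-of-tree invocation above is the generic fallback.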
@tshikose
Thanks for your reply. I agree that RAID is not part of the certification objectives; however, it poses a fine opportunity for learning while troubleshooting. It will give me the kind of practical understanding I will need out there, which no module can teach.
@tofino_surfer
Thanks for your reply. Your comment is about 30,000 feet over my head, but I am going to sit and work through it slowly until I understand what needs to be done. I presume you are referring to the live installer ISO for the CentOS 7 release? (CentOS-7-x86_64-LiveKDE-1611)
Thanks for the articles, I will read through them and see if I can get this working.
Is modprobe part of all live installations?
I also assume that the live installation will give me the ability to get to a terminal session?
I do not know how to compile, but again, I will google it.
A little bit more information:
The ESRT RAID controller is indeed on-board on the Intel motherboard, so I agree it must be fakeRAID.
It has the following settings, some of which disable the RAID controller and manage the disks as if they were directly connected to the motherboard, as in a desktop system:
Enhanced
Compatibility
AHCI
SW-RAID
So I tried all of these settings, expecting at least Compatibility mode, which supplies legacy connectivity, to present a single viable disk to CentOS 7, but to no avail.
What troubles me is that CentOS 5 picks up the single RAID drive presented to it with no problem, while CentOS 7, which should be more advanced, does not.
>>I also assume that the live installation will give me the ability to get to a terminal session?
The LiveKDE is a full KDE GUI that will allow you to open up the same terminal windows as a desktop session.
>>I do not know how to do a compile but again I will google it.
The contents of the tgz archive usually include a README with detailed compilation instructions, so no searching is necessary.
It should be mentioned that this is a very complex thing to do from a live distribution. The filesystem is in memory only, so any changes you make are lost on reboot. You will need to mount a flash drive or external drive to save the compiled driver.
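For instance, the copy-off step from the live session might look like this (the device node /dev/sdc1 and file name megasr.ko are assumptions; check lsblk for the flash drive's real name):

```shell
#!/bin/sh
# Hypothetical names -- verify the flash drive with lsblk before mounting.
module=megasr.ko
stick=/dev/sdc1
if [ -b "$stick" ] && [ -f "$module" ]; then
    mkdir -p /mnt/stick
    mount "$stick" /mnt/stick
    cp "$module" /mnt/stick/
    umount /mnt/stick   # flush writes before pulling the drive
else
    echo "adjust stick=$stick and module=$module to match your system"
fi
```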
If it proves too difficult to do all of this from a Live environment an alternate installation method using a temporary installation is possible.
1. Find an external USB hard drive.
2. Do the installation with the packages you want to this external USB hard drive.
3. Reboot into this external USB drive and do a full yum system update.
4. Download the Intel tgz archive and compile the ESRT driver for the kernel you have.
5. Add this driver to your initrd by creating a dracut conf file.
6. Boot into Windows and use Windows tools to make space on the RAID 1 drive.
7. Boot into CentOS and use modprobe to load both the driver and dm_mod.
8. If you can see the RAID 1 with an entry in /dev/mapper/, use GParted to create partitions on the RAID 1 for your root (/) and possibly /home.
9. Use cp or rsync to copy the contents of your / and /home partitions on the external HDD to the new partitions on the RAID 1 through /dev/mapper.
10. Set up the grub2 bootloader to dual-boot this new root directory and Windows.
You didn't mention whether you have UEFI or older BIOS firmware, so the last step will vary.
This is a very complex expert installation. If you can accomplish this you will learn a lot about Linux.
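The dracut part of step 5 is just a one-line drop-in file. A minimal sketch, assuming the compiled module is called megasr (substitute whatever name the Intel driver actually builds to); it stages the conf in a temp dir, and the commented commands show what you would run as root on the real system:

```shell
#!/bin/sh
# Stage a dracut drop-in that forces the (assumed) "megasr" driver,
# plus dm_mod, into every initramfs dracut builds.
staging=$(mktemp -d)
printf 'add_drivers+=" megasr dm_mod "\n' > "$staging/esrt2.conf"
cat "$staging/esrt2.conf"
# As root on the installed system:
#   cp "$staging/esrt2.conf" /etc/dracut.conf.d/
#   dracut -f /boot/initramfs-$(uname -r).img $(uname -r)
```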
If you don't have any luck, you might want to try setting the drives as two normal (non-raid) drives, and then setting up software RAID via the installer. As far as I know, if the RAID you're currently using is indeed fake RAID, you don't lose very much performance by going to software RAID.
Another option would be to set up a 3rd disk as an OS disk and, once the OS is installed and updated, add the RAID disks and set them to be your /home partition.
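If you take the software RAID route, the create step from the installer's shell would look roughly like the line below. The device names /dev/sda and /dev/sdb are assumptions, and mdadm --create wipes the member disks, so the command is only echoed in this sketch; verify the names with lsblk and drop the echo on the real machine:

```shell
#!/bin/sh
# Hypothetical device names; mdadm --create destroys existing data,
# so the command is echoed rather than executed in this sketch.
cmd='mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb'
echo "$cmd"
# After creating, /proc/mdstat shows the initial mirror sync:
#   cat /proc/mdstat
```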
>> If you don't have any luck, you might want to try setting the drives as two normal (non-RAID) drives, and then setting up software RAID via the installer. As far as I know, if the RAID you're currently using is indeed fake RAID, you don't lose very much performance by going to software RAID.
This would only work for a Linux-only install. The OP already has Windows running on this fakeRAID and wants to dual-boot with CentOS 7. MS Windows won't work with Linux mdadm software RAID. The only reason to use fakeRAID instead of Linux mdadm software RAID is if you want both Linux and Windows to boot from and use the same RAID.
I strongly recommend using a second disk for a temporary install, compiling the driver, and then migrating the Linux installation to the fakeRAID as I described in my earlier post. That is the simplest route, with the fewest maintenance problems with respect to updating the kernel as part of a full system update after an install.