[SOLVED] PIKE 2308 Hardware RAID seems okay but I'm not sure . . .
Hi all of you out there,
A while ago I asked a question about my ASUS PIKE 2308 RAID controller.
I wanted to check whether CentOS 7 supported the card, because I wanted to install Nethserver 7.
Well, Nethserver turned out to be a bit disappointing.
So I thought: the only things I really need are Samba, DNS and a LAMP stack.
That shouldn't be too hard to install myself.
I chose Ubuntu 16.04 LTS.
Everything looks okay; I just wanted to check what the experts have to say about it.
I analyzed the boot output, which I think comes from the kernel.
Here are the interesting bits:
What's all this about RAID 6? I configured a RAID 1.
Then the relevant contents of my /dev directory:
Code:
sda1
sda2
sda5
sda6
sda7
sdb
sdc
Why doesn't sdb have 5 partitions too?
And fstab
Code:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sdc6 during installation
UUID=f65aa074-9b7e-4d22-af5a-db13ba799704 / ext4 errors=remount-ro 0 1
# /boot was on /dev/sdc1 during installation
UUID=3389966b-750d-4af4-a483-8bfde29dd109 /boot ext4 defaults 0 2
# /data was on /dev/sdc7 during installation
UUID=204b17ca-0f9a-4d6c-896b-d689db060fe5 /data ext4 defaults 0 2
# swap was on /dev/sdc5 during installation
UUID=cec753fc-75ba-4f76-9cc3-507b0ce5625b none swap sw 0 0
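For reference, the UUID-to-device mapping in an fstab like this can be double-checked with lsblk and blkid (a sketch; the UUID below is just the root one from my fstab, adjust for your own system):

```shell
# Show every block device with size, filesystem type, UUID and mount point:
lsblk -o NAME,SIZE,TYPE,FSTYPE,UUID,MOUNTPOINT

# Find the device that currently carries the root filesystem's UUID
# (UUID taken from the fstab above):
blkid -U f65aa074-9b7e-4d22-af5a-db13ba799704
```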
The only thing I found strange is that mdadm was installed. mdadm is for software ("fake") RAID, so why is it installed?
Can I safely remove the mdadm package?
Thanks in advance for your time and for checking my findings ;-)
Quote:
Originally Posted by vitronix
What's all this about RAID 6? I configured a RAID 1.
That's just the software RAID subsystem benchmarking a number of different algorithms to figure out which is the fastest. This happens because you have software RAID support compiled into your kernel. Just ignore it.
These, however, are the relevant log entries for your RAID controller:
As you can see, your RAID 1 volume is initialized and working properly.
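If you want to pull those controller entries out of the kernel log yourself, something like this should work (a sketch; the SAS2308 on the PIKE uses the mpt2sas driver, which newer kernels have folded into mpt3sas, so grep for both):

```shell
# Controller / driver initialization messages from the LSI (Avago) chip:
dmesg | grep -iE 'mpt[23]sas|lsi|avago'

# The logical RAID volume shows up as an ordinary SCSI disk:
dmesg | grep -iE 'raid|sd[ac]'
```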
Quote:
Originally Posted by vitronix
Then the relevant contents of my /dev directory:
Code:
sda1
sda2
sda5
sda6
sda7
sdb
sdc
Why doesn't sdb have 5 partitions too?
Why would it? It's not related to your hardware RAID. (It looks like it might be a memory card reader of some sort.)
The whole point of hardware RAID is that the RAID controller handles all the work. All the OS sees is one or more logical RAID volumes appearing as if they were regular block devices.
The individual physical drives in a hardware RAID set are not exposed to the OS at all. In order to manage a hardware RAID set (check status, rebuild, verify/scrub, expand, convert etc.) you'll have to use software designed to communicate directly with the controller (through the driver).
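For the SAS2308 in IR (Integrated RAID) mode, LSI/Avago ships a command-line utility, sas2ircu, that talks to the controller through the driver. A sketch, assuming sas2ircu is installed and the controller is index 0:

```shell
# List the sas2ircu-capable controllers in the system:
sas2ircu LIST

# Show the RAID volume and the physical drives behind it:
sas2ircu 0 DISPLAY

# Quick health check of the volume (Optimal / Degraded / etc.):
sas2ircu 0 STATUS
```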
Quote:
Originally Posted by vitronix
The only thing I found strange is that mdadm was installed. mdadm is for software ("fake") RAID, so why is it installed?
Can I safely remove the mdadm package?
mdadm is included in most distributions. If you don't use it, you can safely remove it.
You'll still see md-related entries in the log, though, unless you also replace the kernel with one without software RAID support. I wouldn't bother with that unless I was running Linux on an embedded system with very little RAM.
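A minimal sketch of doing that on Ubuntu/Debian, first confirming that no software RAID arrays are actually in use (an assumption here is that apt is your package manager):

```shell
# /proc/mdstat lists active md arrays; if nothing appears below the
# "Personalities" header, the md subsystem is idle:
cat /proc/mdstat

# Then the package can be removed:
sudo apt-get remove mdadm
```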
I thought: why not test the whole thing?
So I removed a SATA cable from one HDD and booted.
The LSI BIOS detected the missing hard disk,
but the system booted normally.
However, when I disconnected the other HDD, the system didn't boot anymore.
So what am I to do? Clearly I did something wrong . . .
I am eager for an answer.
I think I should have five partitions on BOTH disks, but how do I set that up?
Vitronix
Last edited by vitronix; 05-02-2017 at 09:58 AM.
Reason: typo
If you lose the other drive you might have to connect the good one to the same controller port that worked in your test. Then you can replace the drive and rebuild the array.
Quote:
Originally Posted by vitronix
So I removed a SATA cable from one HDD and booted.
The LSI BIOS detected the missing hard disk,
but the system booted normally.
Yes, the controller will see the RAID metadata on the remaining disk and will mark the RAID as "degraded" before continuing to boot.
Quote:
Originally Posted by vitronix
However when I disconnected the other HDD the system didn't boot anymore.
So what am I to do? Clearly I did something wrong . . .
If you briefly reconnected both disks and then disconnected the first, this is to be expected.
When a missing drive comes back online, its metadata will show it to be out of sync. The controller will then typically initiate a rebuild, and until that process has completed, the RAID remains degraded. Disconnecting the first drive before the rebuild is complete will leave you with one drive without valid data.
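Before pulling a drive again, it's worth confirming that the rebuild has finished. A hedged sketch, again assuming the sas2ircu utility and controller index 0 (the exact status wording may differ between firmware versions):

```shell
# Poll the controller until the volume no longer reports a rebuild:
while sas2ircu 0 STATUS | grep -qi 'rebuild'; do
    echo "RAID still rebuilding, waiting..."
    sleep 60
done
echo "Volume no longer reports a rebuild in progress."
```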
Quote:
Originally Posted by vitronix
I think I should have five partitions on BOTH disks but how do I install that?
You don't. /dev/sdb is NOT the second drive in your array.
/dev/sda is the logical RAID volume, and neither physical disk is directly visible to the OS. That's how hardware RAID works.
What the operating system sees, as is clear from the fstab file, is sdc;
sda and sdb are the drives used for building the array.
My expectations of RAID 1 were too high.
As mentioned, when you reconnect the disconnected drive, the RAID controller will start to resync the drives.
You have to wait until everything has been rebuilt; only then can you safely boot again.