LinuxQuestions.org
Linux - Hardware: This forum is for hardware issues.
Old 06-25-2021, 01:40 PM   #1
Dimension1
LQ Newbie
NVMe hardware RAID installation and configuration


OS: Ubuntu 20.04
Motherboard: TUF X570
Processor: AMD Ryzen 3900x
Raid controller: ASUS Hyper M.2 X16 Gen 4 Card

To start, I have installed the RAID controller with four 2TB NVMe drives. I have enabled PCIe RAID Mode in the BIOS and booted into Linux. NOTE: these drives are not for the Linux OS install; they are for other uses.

lspci -vv | grep -i raid

does not return any results
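For what it's worth, a broader check than grepping for "raid" may be useful here; this is just a diagnostic sketch, assuming stock lspci and the md tools Ubuntu ships:

# List every storage-related PCI function the slot exposes; NVMe drives
# normally appear as "Non-Volatile memory controller", not as "RAID":
lspci | grep -i -e raid -e nvme -e "non-volatile"

# Any firmware/BIOS RAID array already assembled by the md driver would show here:
cat /proc/mdstat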

fdisk -l
Disk /dev/nvme0n1: 1.84 TiB, 2000398934016 bytes, 488378646 sectors
Disk model: PCIe SSD
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/nvme1n1: 1.84 TiB, 2000398934016 bytes, 488378646 sectors
Disk model: PCIe SSD
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/nvme2n1: 1.84 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: PCIe SSD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/nvme3n1: 1.84 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: PCIe SSD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Outside of the fact that I need to fix the I/O size, I am missing something basic. I have set up a software RAID using mdadm in the past. However, it is my understanding that this is supposed to be a hardware RAID device. Is that not correct? Is there something else that needs to be set up in the BIOS or in Linux? What are the steps I need to take to set up a hardware-based RAID 0 array for these drives?

The reason I am confused is that lspci does not return a hardware RAID device. Should it? Also, if I use mdadm to set up the drives, isn't that a software RAID configuration? If so, how do I take advantage of this hardware RAID device other than having the four drives show up in Linux? NOTE: they do show up as PCIe SSDs. The controller is installed in a PCIe slot with four NVMe drives on it.

I feel that if I use mdadm and set up a software RAID configuration, I am not utilizing the speed and other benefits of a hardware RAID device. Am I missing something here, or does Linux not recognize this device as a hardware RAID device? Is there something I can do to make it recognize it as a hardware RAID device, or other ways to check for it? Please assist in properly setting up a hardware RAID 0 configuration, if that is possible with this hardware.
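For reference, if this ends up being done with mdadm after all, the sequence I used in the past looked roughly like the sketch below. Device names are taken from the fdisk output above; ext4 and the /mnt/raid0 mount point are arbitrary choices, and --create destroys whatever is on the drives:

# Sketch of an mdadm RAID 0 across the four carrier-card drives.
# WARNING: --create wipes existing data on the listed devices.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# Filesystem and mount point are illustrative choices only:
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/raid0
sudo mount /dev/md0 /mnt/raid0

# Persist the array so it reassembles on reboot (Ubuntu paths):
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u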
 
Old 06-25-2021, 02:21 PM   #2
jefro
Moderator
Hardware RAID usually has an F-key prompt at boot to configure the array. Did you do that?
After that, it will show up as something to the OS. For example, if you configured it all as one array, it will then show up to the distro as a single resource.
 
Old 06-25-2021, 02:41 PM   #3
Dimension1
LQ Newbie
Original Poster
I did not. Can you elaborate on how to do this?

To show I have been trying: the vendor site does not cover Linux installation.
https://www.asus.com/us/Motherboards.../HelpDesk_QVL/

Last edited by Dimension1; 06-25-2021 at 02:47 PM.
 
Old 06-25-2021, 04:48 PM   #4
Dimension1
LQ Newbie
Original Poster

I think I am getting closer, but could use a little more help.

I followed the AMD Bios setup information in chapter 2 here:
https://dlcdnets.asus.com/pub/ASUS/m..._EM_V4_WEB.pdf

However, now the disks show up with partitions labeled "Linux swap / Solaris". Also, they are 14.6T. The drives are 4 x 2TB, so I would expect 8TB; I am guessing swap adds some. Also, all four drives now show a partition of the same size. I would expect one partition, since this is RAID 0. Can anyone help me get this straight? I do not wish to use swap, only the drives in RAID 0.

Disk /dev/nvme0n1: 1.84 TiB, 2000398934016 bytes, 488378646 sectors
Disk model: PCIe SSD
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x11a89d7a

Device Boot Start End Sectors Size Id Type
/dev/nvme0n1p1 63 3907029230 3907029168 14.6T 82 Linux swap / Solaris


Disk /dev/nvme3n1: 1.84 TiB, 2000398934016 bytes, 488378646 sectors
Disk model: PCIe SSD
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x4dc5850c

Device Boot Start End Sectors Size Id Type
/dev/nvme3n1p1 63 3907029230 3907029168 14.6T 82 Linux swap / Solaris


Disk /dev/nvme1n1: 1.84 TiB, 2000398934016 bytes, 488378646 sectors
Disk model: PCIe SSD
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xe3f9c479

Device Boot Start End Sectors Size Id Type
/dev/nvme1n1p1 63 3907029230 3907029168 14.6T 82 Linux swap / Solaris


Disk /dev/nvme2n1: 1.84 TiB, 2000398934016 bytes, 488378646 sectors
Disk model: PCIe SSD
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xd96d1a48

Device Boot Start End Sectors Size Id Type
/dev/nvme2n1p1 63 3907029230 3907029168 14.6T 82 Linux swap / Solaris


======
free
total used free shared buff/cache available
Mem: 131896100 380708 131108904 1772 406488 130458072
Swap: 0 0 0
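A sanity check on the 14.6T figure, using nothing but the numbers in the fdisk output above: the partition spans 3907029168 sectors, which is exactly one 2TB drive at 512-byte sectors, but fdisk is multiplying that count by this disk's 4096-byte sector size:

echo $((3907029168 * 512))    # 2000398934016  = exactly the 2TB drive size
echo $((3907029168 * 4096))   # 16003191472128 = ~14.55 TiB, what fdisk shows

So the 14.6T number looks like a sector-size mismatch (a table written with 512-byte addressing being read on a 4096-byte device), not an actual 14.6TB of space.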
 
Old 06-25-2021, 05:02 PM   #5
Dimension1
LQ Newbie
Original Poster
Even more confused by the partitions now. This conflicts with the fdisk output. Please help. NOTE: swap is not configured in fstab.

lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 109.9M 1 loop /snap/cmake/882
loop1 7:1 0 110.1M 1 loop /snap/cmake/888
loop2 7:2 0 55.5M 1 loop /snap/core18/2074
loop3 7:3 0 69.9M 1 loop /snap/lxd/19188
loop4 7:4 0 67.6M 1 loop /snap/lxd/20326
loop5 7:5 0 32.3M 1 loop /snap/snapd/12159
loop6 7:6 0 32.1M 1 loop /snap/snapd/12057
loop7 7:7 0 55.4M 1 loop /snap/core18/2066
sda 8:0 0 232.9G 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
├─sda2 8:2 0 1G 0 part /boot
└─sda3 8:3 0 231.4G 0 part
└─ubuntu--vg-ubuntu--lv 253:0 0 115.7G 0 lvm /
nvme0n1 259:0 0 1.8T 0 disk
└─nvme0n1p1 259:1 0 1.8T 0 part
nvme3n1 259:2 0 1.8T 0 disk
└─nvme3n1p1 259:3 0 1.8T 0 part
nvme1n1 259:4 0 1.8T 0 disk
└─nvme1n1p1 259:6 0 1.8T 0 part
nvme2n1 259:5 0 1.8T 0 disk
└─nvme2n1p1 259:7 0 1.8T 0 part
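One way to reconcile the fdisk and lsblk views without touching anything is to ask what signatures are actually on the devices; a read-only sketch, assuming the usual util-linux tools:

sudo lsblk -f                       # FSTYPE column for every device
sudo wipefs --no-act /dev/nvme0n1   # lists signatures; --no-act writes nothing
sudo blkid /dev/nvme0n1p1           # no output = no recognizable filesystem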
 
Old 06-25-2021, 05:11 PM   #6
jefro
Moderator
Just to check. Does your computer have an internal drive also?
 
Old 06-25-2021, 05:21 PM   #7
Dimension1
LQ Newbie
Original Poster
It has two small SSDs, ~80-120GB: one with Linux (the fdisk output shows that one) and one with Windows. I do not use Windows now and hope I don't have to in order to get this working properly. The software I plan to use these drives for runs much better on Ubuntu.
 
Old 06-25-2021, 05:32 PM   #8
Dimension1
LQ Newbie
Original Poster
My bad. The Linux SSD is 250GB and I didn't include it in my paste of the fdisk output. Here it is:

Disk /dev/sda: 232.91 GiB, 250059350016 bytes, 488397168 sectors
Disk model: Samsung SSD 870
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 45813697-34A4-4D85-8C46-41814D1FFFAD
 
Old 06-25-2021, 05:42 PM   #9
Dimension1
LQ Newbie
Original Poster
I was hoping someone else in the community who has installed and set up this card on Linux could help; I can't imagine I am the only one. I could always delete the partitions and use mdadm to create a software RAID. However, I am not smart enough to know what a software RAID configuration sitting on top of the hardware RAID card configured in my BIOS would do to my I/O, so I have avoided it. Since this PCIe card has great I/O and my motherboard has 16 lanes for its installed slot, I really want to use it properly.

If anyone is using this card with Linux, or can at least guide me in the right direction, I would appreciate it. All internet searches for a solution have not proven effective or that helpful, which is why I have reached out to the Linux community: I figure if anyone knows how to set up this hardware properly, someone here would. Any help is appreciated.
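On the I/O worry: rather than guessing, a rough sequential-read comparison between one drive and the assembled array would show whether software RAID is leaving speed on the table. hdparm numbers are only indicative, and /dev/md0 is hypothetical until an array is actually created:

sudo hdparm -t /dev/nvme0n1   # one drive on the carrier card
sudo hdparm -t /dev/md0       # the four-drive RAID 0, once assembled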
 
Old 06-25-2021, 05:50 PM   #10
Dimension1
LQ Newbie
Original Poster
For someone who is hardware savvy: if you look at the installation guide for the controller, Windows requires a driver. Does that mean it is really not hardware RAID? Do you think the right configuration in Linux is to set up a software RAID on top of it?
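One way to probe the "is it really hardware RAID?" question from Linux is to look for firmware RAID metadata on the member disks; with a true hardware controller, the OS would see a single volume rather than four drives. A sketch, assuming mdadm is installed and dmraid is available in the repos:

sudo mdadm --examine /dev/nvme0n1   # reports md/firmware RAID metadata, if any
sudo dmraid -r                      # lists legacy fakeraid metadata, if present

The fact that all four drives show up individually as /dev/nvme* already points toward firmware-assisted (software) RAID rather than a dedicated controller.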
 
Old 06-25-2021, 05:52 PM   #11
Dimension1
LQ Newbie
Original Poster
Why in the world did the BIOS changes create a 14.6TB partition labeled "Linux swap / Solaris"? The RAID size is ~8TB.
 
Old 06-25-2021, 06:20 PM   #12
Dimension1
LQ Newbie
Original Poster
Still looking for help. BIOS images are here: https://imgur.com/a/HCkYZNx . Practically begging. I do not want to have to do this on Windows. I think there is someone who can help in some way to get this figured out. Anything?
 
Old 06-25-2021, 07:18 PM   #13
jefro
Moderator
From the ASUS page: "server-grade PCB". I assume that means hardware RAID, not just that the material it is made from is of somewhat good quality. It looks to be a high-end card.

You can see from the ASUS page that, on some supported hardware, some of the control could be in the BIOS.

Also this. "*M.2 SSD support dependent on CPU and motherboard design. Please check the specifications and user manual of your motherboard. Update to the latest BIOS and set up PCIe bifurcation settings before using the NVMe RAID function."
 
Old 06-25-2021, 07:29 PM   #14
Dimension1
LQ Newbie
Original Poster
Thanks. I have checked the specifications and user manual of my motherboard. This card is fully supported in the PCIe x16-1 slot with 16 lanes. PCIe bifurcation settings only need to be configured manually on Intel platforms; with AMD (which I have), that is not needed. Also, there is nowhere to set it in the BIOS (see the BIOS images). If you know of something I am overlooking, please share.

This is my reference: https://www.asus.com/us/support/FAQ/1037507

If you see something I am missing, please share.

Last edited by Dimension1; 06-25-2021 at 07:43 PM.
 
Old 06-25-2021, 07:35 PM   #15
Dimension1
LQ Newbie
Original Poster
I tried mounting one of the partitions that was created, to see what would happen if I copied a file to it. I was wondering if the file would show up on all of them (I planned to mount two for testing). I noticed the fdisk information showed they all had the same stats and the same partition type ID, and I started to wonder whether they really are in a RAID before I go delete them. This is what I got:

mount /dev/nvme0n1p1 /mnt/test1
mount: /mnt/test1: wrong fs type, bad option, bad superblock on /dev/nvme0n1p1, missing codepage or helper program, or other error.


So either they are not good partitions and I should delete them anyway, or there is something I need to do to them that I do not know about. Thoughts?
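That mount error is exactly what one gets when a partition carries no filesystem at all. Before deleting anything, a quick read-only check (standard tools, nothing modified):

sudo blkid /dev/nvme0n1p1
sudo file -s /dev/nvme0n1p1   # prints just "data" when nothing is recognized

If neither reports a filesystem, there is nothing to mount yet; a filesystem would have to be created first, which is a separate step from whatever the BIOS RAID metadata is doing.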
 
  


