Linux - Hardware
This forum is for Hardware issues. Having trouble installing a piece of hardware? Want to know if that peripheral is compatible with Linux?
How do I set up 2 HDDs in RAID 1, then split that array into 2 partitions: one formatted NTFS for use by Windows (installed on one SSD), and the other formatted ext4 for use by Linux (installed on another SSD)?
Motherboard:
Asus Z170-PRO ATX LGA1151
OS:
Ubuntu Server
Windows 10 Pro
Disclaimer:
I'm just asking how to do this, not why I should or shouldn't.
I will have a separate setup for backups to send data to from storage.
Since you don't tell us what version/distro of Linux you're using, or what kind of RAID you're talking about, we can't tell you. If you have hardware RAID, it should present two disks as a single disk. From there, make your partitions however you see fit, and format them...they're just disk partitions. Same with software RAID, but setting that up depends on your OS.
Personally, I'd not do it at all...the difference in partition types on one RAID volume would probably negatively impact your system. Why do you need to/want to do this? If you're needing Windows systems to access a Linux server, I'd keep the entire RAID volume as a Linux-native filesystem (ext4, jfs, zfs, etc.), and just use Samba to let them write to it.
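To make the software-RAID path concrete, here is a minimal Linux-side sketch with mdadm. The device names (/dev/sdb, /dev/sdc) are assumptions — check yours with lsblk first — and note that Windows has no driver for mdadm metadata, so the NTFS half of such an array would only be reachable from Linux. The commands are printed as a dry run rather than executed.

```shell
#!/bin/sh
# Sketch of the software-RAID route on the Linux side, using mdadm.
# Printed as a dry run (nothing is touched); drop the run() wrapper
# and run as root to execute for real.
# ASSUMPTION: the two HDDs are /dev/sdb and /dev/sdc -- check with lsblk.

run() { echo "+ $*"; }          # dry-run wrapper: print instead of execute

# 1. Mirror the two disks into one array device.
run mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# 2. Split the array into two equal partitions.
run parted -s /dev/md0 mklabel gpt
run parted -s /dev/md0 mkpart half1 0% 50%
run parted -s /dev/md0 mkpart half2 50% 100%

# 3. Format one half NTFS, the other ext4.
run mkfs.ntfs -Q /dev/md0p1
run mkfs.ext4 /dev/md0p2

# 4. Record the array so it reassembles at boot.
run sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'
run update-initramfs -u
```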
Quote:
Originally Posted by TB0ne
Since you don't tell us what version/distro of Linux you're using, or what kind of RAID you're talking about, we can't tell you. If you have hardware RAID, it should present two disks as a single disk. From there, make your partitions however you see fit, and format them..they're just disk partitions. Same with software RAID, but setting that up depends on your OS.
Distro/version = Ubuntu Server, current.
I already mentioned what kind of RAID: I said RAID 1.
I also listed the motherboard; it has the RAID capability built into it.
So if I set up the 2 disks in RAID 1, I'm assuming both systems during install will immediately recognize them as 1 disk. If I set RAID up via software instead, it would get a bit confusing if I set it up in Linux but Windows also had to use it, or vice versa.
Quote:
Personally, I'd not do it at all...the difference in partition types on one RAID volume would probably negatively impact your system. Why do you need to/want to do this? If you're needing Windows systems to access a Linux server, I'd keep the entire RAID volume as a Linux-native (ext4, jfs, zfs, etc.), and just use Samba to let them write to it.
See updated OP disclaimer. I'm not asking to discuss the why of it, just asking for the how.
I am curious how using a RAID setup with something like mdadm would be better than directly through the motherboard.
I doubt Windows would even see a RAID set up in Linux, although I've never tried it.
It wouldn't. I'm looking for a way to use motherboard raid so both OS types could recognize it.
options:
raid card - this would definitely work
mobo raid - this might work
soft raid - platform dependent
The catch with the mobo RAID appears to be the mismatched hardware: an M.2 SSD and a SATA SSD. It seems only one of those would recognize a RAID array, even though the SSDs themselves are not included in it. The 2 HDDs form the intended RAID and are SATA, so theoretically the OS on the SATA SSD would see the array, but the OS on the M.2 SSD would not. ...Maybe.
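One way to narrow this down from the Linux side is to ask the kernel what it actually sees. A quick read-only checklist might look like this — the device names are examples only, and the commands are printed as a dry run:

```shell
#!/bin/sh
# Read-only checks for what Linux actually sees, printed as a dry run.
# Device names are examples only -- substitute your own from lsblk.

run() { echo "+ $*"; }                  # dry-run wrapper

run lsblk -o NAME,SIZE,TYPE,TRAN        # all disks; SATA shows tran=sata, M.2 NVMe shows nvme
run mdadm --detail --scan               # arrays already assembled (incl. firmware ones)
run mdadm --examine /dev/sda /dev/sdb   # per-disk RAID metadata, if any (needs root)
```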
Quote:
Disclaimer: I'm just asking how to do this, not why I should or shouldn't. I will have a separate setup for backups to send data to from storage.
Well, if you don't want people to give you opinions, why ask on a public forum??? Sorry for trying to give you advice and help you. Won't happen again.
Quote:
Distro/version = ubuntu server, current. I already mentioned what kind of RAID, I said RAID 1.
Right...RAID 1 is the RAID level. The type of RAID means either hardware or software.
Quote:
I also listed the motherboard, it has the RAID capability built into it.
So we have to look up your motherboard's specs before we can try to help you? And we don't know if you're using an external controller or not...we don't know details unless you provide them.
Quote:
So if I set up the 2 disks to RAID 1, then I am assuming that means the systems during install will immediately recognize it as 1 disk; or if I set RAID up via software, that would get a bit confusing if I set it up in Linux, but Windows will also use it, or vice versa.
Hardware RAID presents a disk array as a single 'drive' to the operating system. What happens from there is dependent on the OS.
Quote:
See updated OP disclaimer. I'm not asking to discuss the why of it, just asking for the how. I am curious how using a RAID setup with something like mdadm would be better than directly through the motherboard.
It wouldn't be, and software RAID is always slower, and TOTALLY dependent on the OS.
Based on the details you provided, your question should be "I'd like to dual-boot my system with Ubuntu 16.04 and Windows 10 on a single RAID1 array. Is this possible?"
The answer is "yes". As said before, hardware RAID will present a single drive to whatever comes after BIOS...be that Linux or Windows. You'll have to install Windows FIRST, and there is ample documentation on how to do this, such as this tutorial: https://turbofuture.com/computers/Du...and-Windows-10
I realize this statement is general, but there is plenty of evidence that this is not true in many cases in this day and age.
Just as a fleeting off the cuff example - Backblaze, one of the bigger online storage companies, uses a custom software RAID on their 150 Petabyte storage cluster.
"For Backblaze Vaults, we threw out the Linux RAID software we had been using and wrote a Reed-Solomon implementation from scratch. It was exciting to be able to use our group theory and matrix algebra from college. We’ll be talking more about this in an upcoming blog post."
Very true, but that's an edge-case. Custom software/hardware will perform better, much like a tailored suit will fit better than off-the-rack. And personal hardware has reached a point where the differences are negligible, but I don't think I'd ever trust software RAID as I do hardware...too many bad experiences.
Absolutely... I understand the hesitance. I've had my bad experiences in the past as well.
A quick anecdote though. I have been in 20-30 datacenters across America over the last few years, including Rackspace, Digital Ocean and Level 3 among others. Most of the setups include huge JBOD storage that uses software RAID now. It's come a LONG way from the Linux 2.4-ish days and is the preferred solution in the places I've worked.
I do understand where trusting hardware RAID comes from, but the vast majority I've seen lately has been software RAID, and it is bulletproof enough these days to be used by the bigger companies. So IMO it's OK to dip your toe into it now without much worry.
Yeah, but I see the reason for the large data centers, since you're having to carve things up for customers on the fly, all the time. Someone wants to throw another bag of $$$ at you for more disk? Sure! Just extend the volume.
But for a physical server, I'd rather get a RAID controller with a huge cache and some big throughput, and let it do the grunt work.
Quote:
Based on the details you provided, your question should be "I'd like to dual-boot my system with Ubuntu 16.04 and Windows 10 on a single RAID1 array. Is this possible?"
I found out that it is indeed possible.
Quote:
Originally Posted by szboardstretcher
"For Backblaze Vaults, we threw out the Linux RAID software we had been using and wrote a Reed-Solomon implementation from scratch. It was exciting to be able to use our group theory and matrix algebra from college. We’ll be talking more about this in an upcoming blog post."
How the software's written can make a night-and-day difference; I'm not surprised that their custom RAID is better.
Quote:
Originally Posted by szboardstretcher
I have been in 20-30 datacenters across America over the last few years, including Rackspace, Digital Ocean and Level 3 among others. Most of the setup includes huge jbod storage that uses software raid now. It's come a LONG way from the linux 2.4-ish days and is the preferred solution in the places I've worked.
It's impressive that companies are trusting software raid enough to use it.
I'm curious what the Rackspace and Digital Ocean datacenters are like.
I recently had a tour of a large datacenter for a private company; it's basically its own campus, very impressive. The security they have set up is intense: you can't even get on campus without the right access, and it's even more difficult to get past the front door of the main facility without a background check. Professional security, then fancy Mission Impossible-like tech inside. Pretty neat stuff.
So I managed to get the RAID working using only the motherboard's RAID capabilities.
Motherboard > set to RAID 1 > create volume with 2 HDDs > boot one OS > take half the RAID > boot the other OS > take the other half. The trick to getting it to work was disabling unused drive bays, as only a limited number of them can be used with RAID enabled.
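For reference, the Linux half of those steps could look roughly like this. On Ubuntu, Intel firmware RAID is typically assembled by mdadm and shows up as something like /dev/md126 — that device name is an assumption, so confirm it with lsblk first. The commands are printed as a dry run; Windows would format its own half from its side.

```shell
#!/bin/sh
# Sketch of claiming half of the motherboard (Intel RST) RAID 1 volume
# from Ubuntu. Dry run: commands are printed, not executed.
# ASSUMPTION: the firmware array appears as /dev/md126 -- verify with lsblk.

run() { echo "+ $*"; }                        # dry-run wrapper

run lsblk                                     # confirm the array's device name
run parted -s /dev/md126 mklabel gpt          # fresh GPT on the mirrored volume
run parted -s /dev/md126 mkpart win 0% 50%    # first half: left for Windows to format NTFS
run parted -s /dev/md126 mkpart lin 50% 100%  # second half: for Linux
run mkfs.ext4 /dev/md126p2                    # format the Linux half
run mkdir -p /srv/storage
run mount /dev/md126p2 /srv/storage           # add an fstab entry to make it permanent
```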