Hi, I'm not sure how to phrase the question, so I'll give an example:
I received a PC with ubuntu-server which has 3 HDDs of 8TB each. When I type "lsblk" I see them as one device (22TB, sdb), and then I mount them as one: "sudo mount /dev/sdb /mnt/HDD".
I need to do the same with 4 SSDs of 1.9TB each. With "lsblk" I see them as sdb, sdc, sdd, sde. I do not want to mount them in 4 different places or mount them 4 times under the same mount point. I would like to have 4 SSDs but have the system see them as 1.
I would assume the HDDs are connected to a hardware RAID controller configured as RAID 0. Without knowing anything about the hardware, you can easily do the same thing in software, using either LVM or a software RAID 0 built with mdadm.
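For example, a minimal mdadm RAID 0 sketch, assuming the four SSDs really are /dev/sdb through /dev/sde as lsblk showed (verify first; this destroys anything already on those drives):

sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/SSD
sudo mount /dev/md0 /mnt/SSD
# make the array persistent across reboots (Ubuntu paths)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u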
As mickaelk said, RAID 0 is striped and has no redundancy; one drive failure would lose all data.
RAID 5 would be more reliable, tolerating one drive failure with no data loss. 4 drives of 1.9TB each in RAID 5 would give a 5.7TB array while providing a safety net in case of a single drive failure.
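Roughly, that is the same mdadm command with --level=5 (device names again assumed to be the four SSDs from the question):

sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# usable space is (4 - 1) x 1.9TB = 5.7TB; one drive's worth of space holds parity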
On my systems I run either RAID 5 or RAID 6 simply to allow for the risk of drive failure.
RAID is not a substitute for backups, but it does provide a little more safety against failures.
Here is a case where I think LVM (Logical Volume Management) is a solution. It is not RAID, but it allows you to join disks together as one. Personally I don't use it (no need, as I just size my disk(s) accordingly), but I 'think' that is what it was designed for. Anyway, something for you to look at.
LVM certainly was designed for a similar function to RAID 0, but without needing to use all the device space at once. Once again, if you apply LVM to a JBOD [(J)ust a (B)unch (O)f (D)isks] array, it gives you the larger space but no redundancy: should a device fail, you get complete loss of data.
The advantage of LVM is that it allows seamless adjustment of file system space without repartitioning or array rebuilding when adding an additional physical disk to the array.
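As a rough sketch, joining the four SSDs from the question into one linear LVM volume could look like this (the names vgdata and lvdata are just examples I made up):

sudo pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
sudo vgcreate vgdata /dev/sdb /dev/sdc /dev/sdd /dev/sde
sudo lvcreate -l 100%FREE -n lvdata vgdata    # one LV spanning all free extents
sudo mkfs.ext4 /dev/vgdata/lvdata
sudo mount /dev/vgdata/lvdata /mnt/SSD

Instead of 100%FREE you can allocate less up front, leave the rest of the extents unallocated, and grow the LV on demand later.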
I use LVM on a large RAID6 array so I get the best of both worlds. I have redundancy (allowing for simultaneous failure of 2 drives with no data loss) in case of drive failure, and I also have the flexibility of adjusting LV size as needed when my storage space requirements change: I can plan for expansion or reserve space as needed and grow volumes live, instead of taking disks offline and repartitioning while relocating data.
I built my 6TB RAID6 array (using four 3TB drives) 7 years ago and use it with LVM as /home. Over time the LV I use as /home has grown from ~1TB to ~4TB and I have never needed to change the physical devices or partitioning, only growing the LV as needed while the system is running. I did have one drive fail when it was ~3 months old, but the only downtime was long enough to remove the failed drive and plug in the replacement. No data loss, and the array rebuilt while the system was operating. If that had been a JBOD array, even with LVM, it would have killed my entire /home data space.
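For what it's worth, growing a mounted LV and its filesystem online is a one-liner; using the hypothetical names from the LVM sketch above:

sudo lvextend -r -L +500G /dev/vgdata/lvdata    # -r also resizes the filesystem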
Makes sense.
Downtime at my house (just the wife and I) is no biggie, with multiple computers around to get to what we want/need. But I do keep good backups (also some offsite -- NOT cloud) of all data and home folders, so I'm not worried if something breaks. If an OS drive dies, that would be just an excuse to load a fresh, maybe newer, version. Therefore I don't use RAID or LVM on any of my systems. Even the home server is just a set of single SSD drives (one 1TB for the OS, one 2TB for entertainment, and one 2TB for home files and software development). The server is only managing 1.8TB of actual data, so expansion is not a problem (if I ever need it). To 'expand', it is simple to make a current backup, replace with a bigger drive, and restore the data. I don't expect to have to expand for several years; data doesn't grow that fast around here. I try to evaluate the risks and keep it simple, stupid (KISS principle).