I'm running the latest version of Ubuntu and I'm determined to have more storage for a potential gaming PC. I have a second hard drive I would like to "merge" (I don't know what to call it) with the first. I've looked up LVM (Logical Volume Manager) but it doesn't provide any clear instructions for exactly what to do. In summary, I wish to hook up my second hard drive along with the first and have the two recognized as one.
I can't recommend using btrfs (I've had kernel faults from raid1 and raid5, along with some loss of data), though raid 1 with btrfs works most of the time (don't mount another drive on a btrfs filesystem and export it via NFS...)
Unfortunately, I've yet to hear of anyone having much success with a RAID setup using Linux. Identical disks with identical firmware on them is recommended (or was, in the old SCSI days). Striping or RAID-0 was for performance, while mirroring or RAID-1 was for data redundancy. Ideally, four identical disks were needed, in one striped pair mirrored to the other striped pair.
For a newbie, maybe you are taking on something impractical. Perhaps your best bet is to max out the RAM for your machine with matched low latency sticks on a motherboard that allows for dual-channel.
Quote:
In summary, I wish to hook up my second hard drive along with the first and have the two recognized as one.
Are these currently unused drives?
If so, contrary to the two members above, this looks like exactly what LVM was designed for. No need for similar sizes, and it can easily be expanded later if you need to add more drives.
Personally I would (and do) use btrfs for things like this, but also wouldn't recommend it to someone not used to the environment.
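For anyone wanting the actual commands, a minimal LVM sketch along those lines, assuming the two drives show up as /dev/sdb and /dev/sdc, hold nothing you need, and the volume group and logical volume names are just examples:
Code:
# mark both drives as LVM physical volumes (destroys existing data)
sudo pvcreate /dev/sdb /dev/sdc
# group them into one volume group
sudo vgcreate data_vg /dev/sdb /dev/sdc
# create a single logical volume spanning all free space in the group
sudo lvcreate -l 100%FREE -n data_lv data_vg
# put a filesystem on it and mount it
sudo mkfs.ext4 /dev/data_vg/data_lv
sudo mkdir -p /mnt/data
sudo mount /dev/data_vg/data_lv /mnt/data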
Quote:
Unfortunately, I've yet to hear of anyone having much success with a RAID setup using Linux. Identical disks with identical firmware on them is recommended (or was, in the old SCSI days). Striping or RAID-0 was for performance, while mirroring or RAID-1 was for data redundancy. Ideally, four identical disks were needed, in one striped pair mirrored to the other striped pair.
Hear from me then. I have four drives, SATA not SCSI, three of them 3TB, one 2TB. Four 2TB partitions, RAID 5. Works.
Quote:
Originally Posted by sidzen
For a newbie, maybe you are taking on something impractical.
That is probably true. mdadm is not that easy to use.
And:
Quote:
Originally Posted by syg00
Are these currently unused drives?
If so, contrary to the two members above, this looks like exactly what LVM was designed for. No need for similar sizes, and it can easily be expanded later if you need to add more drives.
Personally I would (and do) use btrfs for things like this, but also wouldn't recommend it to someone not used to the environment.
Yes, this is what LVM is for. However, a combination of two disks is more likely to fail than a single disk, so your filesystem is also more likely to fail.
Last edited by berndbausch; 11-25-2015 at 12:05 AM.
If the 1st drive is already used (& I suspect it is), you can simply create a mount point somewhere on it and attach the 2nd drive there.
Note that LVM'ing 2 drives makes one big one, but if either one dies, you lose ALL the data on both (because it's one FS).
Ensure you have up to date backups!
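A rough sketch of that simpler approach, assuming the second drive already carries a single empty partition /dev/sdb1 and that /data is the mount point you want (both names are just examples):
Code:
# put a filesystem on the second drive's partition (this wipes it)
sudo mkfs.ext4 /dev/sdb1
# create the mount point on the existing drive and mount the new filesystem
sudo mkdir -p /data
sudo mount /dev/sdb1 /data
# find the filesystem UUID, then add a line to /etc/fstab so it mounts at boot
# (replace <uuid-from-blkid> with the value blkid prints)
sudo blkid /dev/sdb1
echo 'UUID=<uuid-from-blkid>  /data  ext4  defaults  0  2' | sudo tee -a /etc/fstab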
Quote:
Unfortunately, I've yet to hear of anyone having much success with a RAID setup using Linux. Identical disks with identical firmware on them is recommended (or was, in the old SCSI days). Striping or RAID-0 was for performance, while mirroring or RAID-1 was for data redundancy. Ideally, four identical disks were needed, in one striped pair mirrored to the other striped pair.
Raid in linear organization (concatenation, as opposed to striped raid 0) doesn't care about sizes. It just concatenates one drive onto the end of the other, and can be used to concatenate more than two drives.
Raid 1 does require matching drives, as it is attempting to mirror the actions of one drive onto another. Mirroring can be done with more than two drives, but that is inefficient, since every additional mirror consumes a whole drive's worth of capacity.
Raid 5 needs at least three drives of the same size and provides the storage space of two of them; one drive's worth of space holds parity, which is used to rebuild the data if a drive fails.
NOTE - "drive" does not necessarily mean physical drive. It is sufficient to have a partition that is the same size. The directions for setting up a raid 1 (or 5) recommend creating a partition of nearly the entire drive. This allows for variations in the size of disks (not all 3TB disks are the same; there will be a varying number of bad spots and such). The leftover space can STILL be used - by partitions composing a raid 0, or just on their own.
Quote:
Originally Posted by sidzen
For a newbie, maybe you are taking on something impractical. Perhaps your best bet is to max out the RAM for your machine with matched low latency sticks on a motherboard that allows for dual-channel.
This can be counted as a learning experience.
It is how I've used this, and tested it - and why I don't recommend btrfs, which has some really nice features - faster, more flexible, simple setup. BUT btrfs raid 5 (or raid 1) can lose your data when/if a partition gets damaged (I zeroed one for tests). It turns out it can't detect that particular failure, and the system gets kernel faults that freeze it. The recovery tools didn't work either. The same test with md raid 5 (and raid 1): during the failure it didn't work either (I/O errors), but there were no kernel faults, and recovery worked (just mark the zapped partition as failed and rebuild). The disadvantage was requiring manual intervention; the advantage was the system remained stable, allowing recovery.
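For what it's worth, that manual intervention with md raid looks roughly like this; /dev/md0 and /dev/sdc1 are just example names for the array and the damaged member:
Code:
# check what the array thinks is going on
cat /proc/mdstat
sudo mdadm --detail /dev/md0
# mark the bad member failed, remove it, then add it (or a replacement) back
sudo mdadm /dev/md0 --fail /dev/sdc1
sudo mdadm /dev/md0 --remove /dev/sdc1
sudo mdadm /dev/md0 --add /dev/sdc1
# rebuild progress shows up in /proc/mdstat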
For your application you want JBOD ("just a bunch of disks").
Most onboard raid controllers support it. Getting the proper drivers for Linux installed and working might be a chore - consult your motherboard manufacturer.
you have 4 disks? or two disks and 4 partitions?
LVM will create a "virtual" partition of whatever size you want and can provide striping (the increased speed), but then so will raid.
Both will increase stress to the hardware.
If you just need more space, then use raid0 or LVM with striping. Not sure if you can have the /boot directory in raid0.
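A rough sketch of the LVM-with-striping option; again, device and volume names are just examples:
Code:
# two physical volumes in one group, as before
sudo pvcreate /dev/sdb1 /dev/sdc1
sudo vgcreate fast_vg /dev/sdb1 /dev/sdc1
# -i 2 stripes the logical volume across both drives for speed
sudo lvcreate -i 2 -l 100%FREE -n fast_lv fast_vg
sudo mkfs.ext4 /dev/fast_vg/fast_lv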
Thank you, gentlemen, for the clarification on RAID setups and LVM. This is what the forums are designed for!
Now, it would be nice to hear back from the OP . . .
Quote:
you have 4 disks? or two disks and 4 partitions?
LVM will create a "virtual" partition of whatever size you want and can provide striping (the increased speed), but then so will raid.
Both will increase stress to the hardware.
If you just need more space, then use raid0 or LVM with striping. Not sure if you can have the /boot directory in raid0.
Fred.
MOST systems use an initrd to identify all raid devices. Before that point, it is up to the BIOS/EFI whether it can use them. You don't usually put /boot on a raid 0 (raid 1 mirroring is better), given how small /boot is (less than 1 GB). The advantage of mirroring is that the BIOS actually loads from one partition alone, and for that it shouldn't matter whether the partition is part of a raid 1 or not. Raid 0 would cause problems because a boot file could span the two partitions, and the BIOS is unlikely to handle that.
I'm saying it is up to the BIOS as to what you can do. It has to load grub from that disk... Most BIOS can/should handle reading from a specific disk - even if that disk is part of a mirror (raid 1).
As long as it can load that, you should be good to go. Grub2 does know some raid features - it is very nearly a kernel on its own.
Personally, I like having two partitions for boot (separate copy). That way if something happens to one - you can direct the BIOS to the other and still get the system up. It does mean that there is some additional work to maintain both copies - but it provides a live backup in case something happens (like accidental deletes).
/boot is small enough that having multiple independent copies is not a hardship even if it isn't raided.
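One common way to get the behaviour described above is to build the /boot raid 1 with the 1.0 superblock format, which lives at the end of the partition, so each member still looks like a plain filesystem to the firmware and bootloader. A sketch, assuming /dev/sda1 and /dev/sdb1 are small matching partitions set aside for /boot (names are examples):
Code:
# metadata 1.0 puts the md superblock at the end of each member,
# so the BIOS/grub can read either partition as if it were not raided
sudo mdadm --create /dev/md3 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1
sudo mkfs.ext4 /dev/md3
# then mount /dev/md3 as /boot and record it in /etc/fstab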