Linux - Newbie
This Linux forum is for members that are new to Linux. Just starting out and have a question? If it is not in the man pages or the HOWTOs, this is the place!
Hi All!
I have two 1 TB hard drives. I would like to set them up in RAID-1 and then mount /var and /opt on the array.
My question: should I create one mirror, md0, of the entire 1 TB, then create two partitions on md0, one for /var and one for /opt?
OR
create two mirrors on the disks: md0 of 400 GB and md1 of 600 GB, and then mount /opt on md0 and /var on md1?
Is this a "6 of 1" situation, or is there a clear benefit of one over the other?
Thanks much!
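For reference, the two layouts described above could be sketched with mdadm roughly as follows. This is a rough sketch, not a tested recipe: sdb and sdc are placeholder device names and the parted percentages are illustrative, so adjust for your actual system.

```shell
# Option 1: one mirror across the whole disks, then partition md0.
# (sdb/sdc are placeholders -- check your device names with lsblk first.)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
parted /dev/md0 mklabel gpt
parted /dev/md0 mkpart var ext4 0% 60%     # ~600 GB for /var
parted /dev/md0 mkpart opt ext4 60% 100%   # ~400 GB for /opt

# Option 2: partition each disk first, then build two mirrors.
# /dev/sdb1 + /dev/sdc1 -> md0 (~400 GB), /dev/sdb2 + /dev/sdc2 -> md1 (~600 GB)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2
```

Note the difference the replies below turn on: in option 1 the partition table lives on md0 (one place to change), while in option 2 the partitions live on each underlying disk (two places to change).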
Were it me, and not using LVM (which would be MY preference), I'd probably go with your first option. That way, if you ever have to adjust partitions, you only have to do it on the single md0 device.
Your second option is based on creating matching partitions on the two 1 TB disks, so if you had to adjust partitions in the future you'd have to do it on each of the underlying sd* devices (i.e. two devices), as opposed to md0 (one device using all of both sd* devices).
As alluded to above, I'm a fan of LVM because one can add, remove, and extend LVs without having to worry about what lies before and after them, unlike partitioning, where that is a major concern. Additionally, with LVM you could initially set /opt and /var to sizes that don't use all of your space, allowing you to grow one or the other later should the need arise, or even to add other LVs you aren't yet considering.
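The "leave room to grow" approach could look something like the following sketch. It assumes md0 already exists as the mirror; the volume group name datavg and the sizes are placeholders, not from the thread.

```shell
pvcreate /dev/md0                 # make the mirror an LVM physical volume
vgcreate datavg /dev/md0          # create a volume group on top of it
lvcreate -L 200G -n opt datavg    # deliberately smaller than the disk...
lvcreate -L 300G -n var datavg    # ...leaving roughly half the VG free
mkfs.ext4 /dev/datavg/opt
mkfs.ext4 /dev/datavg/var

# Later, if /var fills up, grow it from the free space in the VG:
lvextend -L +100G /dev/datavg/var
resize2fs /dev/datavg/var         # grow the ext4 filesystem to match
```

The key point is that unallocated VG space can go to either LV (or a new one) later, with no repartitioning.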
Thanks for the reply Mensa!
So, if I'm reading you correctly, an ideal config would have one "md" device mirroring the entire 1 TB disks to each other, and then use LVM on top of that "md" device to create logical volumes for /var and /opt?
That's the way it would traditionally be done. These days LVM can define and manage RAID itself, which is a much better option. LVM will make sure data (and metadata) are separated properly across the devices, and will handle things if more devices are added later. It also allows for failure policies, so the loss of a device is handled auto-magically.
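A minimal sketch of LVM-managed RAID-1, with no mdadm layer at all (device and volume group names are placeholders; this assumes a reasonably recent lvm2, as discussed below):

```shell
# Give LVM both raw disks and let it do the mirroring itself.
pvcreate /dev/sdb /dev/sdc
vgcreate datavg /dev/sdb /dev/sdc

# --type raid1 -m 1 means one mirror copy, i.e. classic two-way RAID-1.
lvcreate --type raid1 -m 1 -L 300G -n var datavg
lvcreate --type raid1 -m 1 -L 200G -n opt datavg

lvs -a -o name,size,segtype datavg   # confirm the LVs show segtype raid1
```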
If I was doing it non-LVM I would go for your first option - maybe not using all of the device(s), but large enough to allow for later growth. But it's much less flexible than LVM.
Edit: just noticed CentOS 5 in your list - better to get to 7 (best) for this LVM support, I would think.
As syg00 says, LVM itself now has RAID capabilities, including mirroring, so were it me I'd likely do it all in LVM and leave out the metadisk RAID layer; but as he also says, you can use your md0 as the PV for LVM.
CentOS5 does have LVM including mirroring capabilities.
If you are still using CentOS 5, as he mentions, you really should upgrade if possible. RHEL5, from which CentOS 5 is built, is EOL as of the end of this month, so you should move on to CentOS 6 at least and, if possible, CentOS 7. 6 is similar enough to 5 that there isn't much of a learning curve. With 7 they changed to systemd (vs. init) and firewalld as a front end for iptables, so there is a bit of learning to do for how those work differently.
All good stuff!
Thanks for the input. I didn't mention it but the machine I'm working on is a CentOS 6.8 system. I just updated my profile! I've played around with LVM on CentOS 5 and wasn't too happy with it. I'll take a look at it on 6.8.
LVM has worked well for me on RHEL5 for many years. Early RHEL5 (.0, .1, etc.) had various limitations, including no real support for ext4 filesystems, but later versions (we ran mostly 5.8, 5.9, and 5.10) work well for most operations in LVM and filesystem land.
Of course there are things that it doesn't support. Last year I found RHEL5's openssl doesn't support TLSv1.1 or higher and RedHat didn't intend to fix that.
With EOL, of course, there is almost nothing you might run across that will be fixed, even if you pay for their extended support. After 10 years and two major version releases since the initial RHEL5, one can hardly blame them for wanting folks to move forward.
I was always pretty lukewarm to LVM, but recent developments have convinced me of its merits. Give it another go.
6.8 may have what I suggested, maybe not - just check the online RHEL doco; it's always pretty good. Likewise the man pages.
Thanks for all the input! Very helpful. I've taken a step back and updated the server to CentOS 7. I may just go the whole LVM route. However, one thing bothers me re: LVM: if you ever have to go into your system in rescue mode, is the LVM not available, making your data unrecoverable? Is that true, or am I missing something about LVM?
Thanks!
Agreed - it used to be that you had to look for LVM support (in a live CD, say), but these days even Ubuntu offers it as an install option, so I'm betting everything has it by default now.
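For anyone with the same worry: in a modern rescue shell the volumes usually just need to be scanned and activated before mounting. A minimal sketch (the datavg/var names are placeholders):

```shell
# From a rescue or live environment:
vgscan                       # scan block devices for LVM volume groups
vgchange -ay                 # activate all logical volumes found
lvs                          # list the LVs to confirm they are active
mount /dev/datavg/var /mnt   # then mount and recover data as usual
```

So the data is not unrecoverable; it just takes one activation step that plain partitions don't need.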