Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
I have a rather large network (35 or so units) which I serve with several file servers; however, /home is on one server. I am trying to enhance reliability, and am considering the following:
/home as a single drive on the server
a software RAID on that same server, rsyncing /home to the RAID once or twice a day
Why not /home on the RAID? I am trying to keep power consumption down, and intra-day reliability is not as important as inter-day reliability. Also, I have no problem firing up the RAID discs when doing an rsync, or even accessing data kept on the RAID, but I am trying to reduce the RAID drive-hours and the power consumption; hence the single-drive strategy.
Comments?
I am also thinking of using a small SSD as cache, but I haven't dug into that as of yet.
Also, the RAID type (1, 5, or 6) has not been determined yet, and may be the topic of another post.
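The once-or-twice-a-day rsync step might look something like this hard-linked snapshot sketch (the paths, mount point, and snapshot naming are assumptions for illustration, not from the thread). Each day's tree costs only the changed files, which is also how you would keep several days of rollbacks:

```shell
#!/bin/sh
# Sketch: sync /home to the backup RAID as a dated, hard-linked snapshot.
# SRC, DEST, and the date-based naming are hypothetical examples.
SRC=/home/
DEST=/mnt/raid/home-backups        # the software RAID, spun up on demand
TODAY=$(date +%F)
YESTERDAY=$(date -d yesterday +%F)

# --link-dest hard-links files unchanged since yesterday's snapshot,
# so only modified files consume new space on the RAID.
CMD="rsync -a --delete --link-dest=$DEST/$YESTERDAY $SRC $DEST/$TODAY"
echo "$CMD"    # echoed here rather than run; drop the echo to use it
```

Run from cron once or twice a day; old snapshot directories can be pruned with a plain rm -rf once they age out.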
See above. RAID10 with 4 disks will most likely give you the best power/performance/reliability balance; have that rsynced to a RAID10 that is seven times the size of the /home RAID10.
The logic behind 7x is simple: it will give you 5 days of rollbacks for the users and also allow a little room for expansion before you need to increase the size of the RAID.
Depending on how important write speed is, you might want to look into RAID6 over RAID10 to increase usable storage space on the same drives.
For example: RAID10 with 6 drives at 3 TB each will provide just under 9 TB of total storage;
RAID6 with 6 drives at 3 TB each will provide just under 12 TB of total storage.
Both will provide you with two disks of redundancy and failover. RAID10 has better write performance, but about the same read performance as RAID6, so the choice will depend on what your needs are for the network.
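The capacity figures above can be sanity-checked with the usual formulas (pure arithmetic, no hardware assumed; n/2 drives usable for RAID10, n-2 for RAID6):

```shell
#!/bin/sh
# Usable capacity for the two layouts discussed: 6 drives of 3 TB each.
DRIVES=6
SIZE_TB=3
RAID10_TB=$(( DRIVES / 2 * SIZE_TB ))    # mirrored pairs: n/2 drives usable
RAID6_TB=$(( (DRIVES - 2) * SIZE_TB ))   # two drives' worth of parity: n-2
echo "RAID10: ${RAID10_TB} TB   RAID6: ${RAID6_TB} TB"
```

Both land "just under" those numbers in practice once filesystem overhead and the decimal-vs-binary terabyte difference are accounted for.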
Also, with the new RHEL 7 using XFS, if you have the RAM, then software RAID will be more than powerful enough for your small LAN.
Keep in mind that XFS will eat up your RAM if you have less than 8 GB total on the server, or even 16 GB for that matter. From what I've read, it's roughly 1 GB of RAM for each TB of storage you want to software-RAID for good performance with XFS.
Nice link, but I differ with some of their numbers. For example, I see about 7 to 8 W for every hard drive that is spun up. My current server runs about 58 W at idle, and if everything spins up, it can draw peaks over 400 W. I know that I can watch the power draw (at 120 V) and see it jump up 8 W or so with each drive that comes online. Their idle draw, at least for the drives I have, ranges from about 0.5 to 2 W.
When I monitor line-voltage power usage, I am also factoring in any inefficiencies in the power supply. Not as bad as it used to be, but it is still there.
Temperature controlled fans save more like 1%, but they keep the boxes quieter.
So not running 4 drives 24/7 yields real savings, and in my case it is much more than one percent.
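Between rsync runs, the backup drives can be told to spin down on their own, so they draw only their ~0.5-2 W standby figure. A sketch with hdparm follows; the device names are assumptions, and the commands are echoed rather than executed so the block is illustrative only:

```shell
#!/bin/sh
# Sketch: ask the backup-RAID members to spin down after an hour idle.
# /dev/sdb..sde are placeholder device names; adjust to your system.
for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    # hdparm -S encoding: 1-240 means n*5 seconds; 241-251 means
    # (n-240)*30 minutes, so 242 = spin down after 60 minutes idle.
    echo hdparm -S 242 "$dev"
done
```

Note that anything that touches the array (even a stat from a monitoring tool) will spin the drives back up, so check with hdparm -C that they actually stay in standby.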
Quote: "keep in mind that xfs will eat up your RAM if you are less then 8G total on the server or even 16G for that matter. Its roughly 1G for each TB or storage you want to software RAID for good performance with xfs from what I've read."
That is a good data point. Several references suggested XFS.
High performance is not high on my list of objectives; rather, reliability and power consumption are.
I had also considered front-ending the hard drives with an SSD.
I am using Slackware, so sticking with Slackware would have some benefit, but I have yet to find someone who is using bm-cache. There would be a trade-off between using bm-cache and using XFS, it seems: how to spend your memory, and how much is needed.
Quote: "Nice link, but I differ with some of their numbers. For example, I see about 7 to 8W for every hard drive that is spun up. My current server runs about 58W at idle, and if everything spins up, it can draw peaks over 400W."
I admit I was thinking of 2.5" drives (which might also be an option for reducing power consumption).
If you do want to work with just one drive active, I'd still consider a RAID-only solution:
Let's say you have three drives. Build, e.g., a RAID 1 with those three drives. Make sure every drive is bootable. Then fail/remove two of them and put them to sleep. The remaining drive is your designated 'master', which is always running.
Once or twice a day, re-add one of the drives to the RAID array. When the syncing is done, fail/remove/sleep it again. Alternate the two drives, so that you always have at least one consistent drive from which you could boot and rebuild the array when the master fails.
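With mdadm, one daily rotation of that scheme might look like the following sketch. The array and partition names are hypothetical, and the commands are echoed rather than executed; note that --re-add only avoids a full resync if the array has a write-intent bitmap, otherwise --add triggers a complete rebuild:

```shell
#!/bin/sh
# Sketch of one rotation: re-add a sleeping mirror, let it sync,
# then fail/remove it and spin the disk down again.
MD=/dev/md0          # hypothetical array name
DISK=/dev/sdc        # today's offline mirror; alternate with /dev/sdd
SPARE=${DISK}1       # the member partition

echo mdadm "$MD" --re-add "$SPARE"    # rejoin the array; resync starts
# ...wait here until /proc/mdstat shows the resync has finished...
echo mdadm "$MD" --fail "$SPARE"      # detach the now-consistent copy
echo mdadm "$MD" --remove "$SPARE"
echo hdparm -y "$DISK"                # put the whole disk into standby
```

Adding the bitmap up front (mdadm --grow --bitmap=internal /dev/md0) keeps each day's resync down to just the blocks that changed.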
Quote: "If you do want to work with just one drive active, I'd still consider a RAID-only solution:"
That's a creative approach. My configuration would have an SSD as the system drive, so I would be booting from that.
I was figuring that the same SSD would serve as a cache as well. So I could take this a step further and spin down the remaining RAID drive until needed, with the idea that some things might be cached (or forced to be cached) in a dedicated partition on the SSD.
...slightly off topic, but I am learning that using various drive sizes in the RAID is not a good idea. I would have thought that for RAID-6 or something similar there would be an arbitrary-size option.
Let's touch on filesystems. After reading, I am disinclined to use XFS or ZFS, and would rather use ext4. The former have nice features, but their reliability may be lacking due to support and implementation maturity. The latter is more bulletproof, but lacks the checks which could enhance reliability.