Hello,
I am setting up a file server for an office that needs good redundancy. The server is an old Dell PowerVault 745N that was lying around. It has four 40GB SATA drives, and I need to know what the most redundant setup would be using fakeraid.
I tried setting up each disk as:
125MB boot mirrored
500MB swap striped
37GB / RAID5 (with LVM running on top of the RAID)
The server worked fine, and the disk partitions failed gracefully (except swap) when I used the fail option in mdadm to artificially fail them. However, when I pulled a disk to simulate a physical failure, the system locked up and the OS will no longer boot. This is obviously unacceptable once the machine is in use.
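For reference, the soft-fail test was along these lines (the md device and partition names here are just examples from my layout):

# mark one member of the RAID5 array as failed, then remove it
mdadm --manage /dev/md2 --fail /dev/sdb3
mdadm --manage /dev/md2 --remove /dev/sdb3
# watch the array state while it runs degraded/rebuilds
cat /proc/mdstat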
Should the swap spaces all be separate and not used in raid? Could LVM be causing problems? Help.
there's no way whatsoever you'd want to stripe your swap, that's kinda nuts really... as you've seen, if a single disk in your array fails then not only do you potentially lose some data, you lose all your currently live swap data, and your system b0rks instantly.
if you want a more resilient system, then put more ram in it so that you don't need to use the swap in the first place. create a number of separate swap partitions on separate drives if you wish, and simply use them all, but that won't provide any more resilience in the event of a drive failing. you can, however, easily swapoff a given swap partition if you want to gracefully take a disc out of service.
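something like this would do it, assuming the second partition on each drive is swap (device names are just examples):

# /etc/fstab - one swap partition per drive, same priority so the
# kernel round-robins between them (no raid involved)
/dev/sda2  none  swap  defaults,pri=1  0  0
/dev/sdb2  none  swap  defaults,pri=1  0  0
/dev/sdc2  none  swap  defaults,pri=1  0  0
/dev/sdd2  none  swap  defaults,pri=1  0  0

# and to take one drive's swap out of service gracefully:
swapoff /dev/sdb2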
I suggest not using software RAID; hardware RAID is better. RAID-5 in software will be slow and you will need hot spares. I suggest two RAID-1 (mirror) arrays, joined together with either LVM2 or EVMS. Both LVM2 and EVMS will stripe (RAID-0) across them when possible.
Another option is RAID-15 or RAID-16, which mirrors two RAID-5 or RAID-6 arrays in a RAID-1 array. Both are very expensive, but very redundant.
For swap, I suggest making a 1 GB partition on each of the four hard drives and putting them in a RAID-1 array. Run mkswap on the array after creating it. Even if three drives go, Linux should not stall.
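For example, with the swap partition as the second partition on each drive (device and md names are just examples):

# 4-way RAID-1 mirror for swap, then format and enable it
mdadm --create /dev/md1 --level=1 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
mkswap /dev/md1
swapon /dev/md1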
I know that software RAID blows in comparison to hardware, but financial issues leave me with little choice here. The person who bought the server a few years ago didn't know much about RAID and he just wanted the lower price. How unfortunate.
Based on the advice you two left me, I am thinking of setting up the machine like one of the following:
Option 1 - 4 disks, all 4 active:
Boot: 128MB, RAID1 across all four disks
Swap: 500MB, RAID1, two sets of two
Root: 37GB, RAID1, two sets of two

Option 2 - 4 disks, 3 active + 1 hot spare:
Boot: 128MB, RAID1, 3 identical
Swap: 1000MB, RAID1, 3 identical
Root: 36.5GB, RAID5, set of 3
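As a sanity check for myself, the first layout would be created with something roughly like this (the partition and md device names are just examples of how I'd slice the disks):

# /boot: 4-way mirror
mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
# swap: two 2-way mirrors
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc2 /dev/sdd2
# root: two 2-way mirrors
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sdc3 /dev/sdd3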
I eschewed the use of a page file because I don't have any experience with one.
I consulted a server guru from a nearby department this morning before I checked this board, and he suggested the hot spare as well when using RAID5.
i'm not massively experienced in raid, but the simplicity of raid1 compared to raid5 is very attractive, and the processing overhead is (i think) lower as you aren't constantly calculating parity across the disks. expansion on raid5 is also more complex. if you have some raid1's and want more disk space, you just buy two more drives and add another raid1 instance (possibly with an LVM volume spread across the two raid1's if you feel like it).
After noting your ideas/concerns, I just got the okay for RAID1.
I guess I'll use lvm on top of the two sets of RAID1 in order to tie them together. Alternatively, I could combine the RAID1 devices into a RAID0 device, but that could be problematic.
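Tying the two root mirrors together with LVM would look something like this (the md device, volume group, and filesystem names are just examples):

# turn the two root mirrors into LVM physical volumes
pvcreate /dev/md3 /dev/md4
# one volume group spanning both mirrors
vgcreate vg0 /dev/md3 /dev/md4
# carve out a root logical volume using all the space
lvcreate -l 100%FREE -n root vg0
mkfs.ext3 /dev/vg0/root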
This machine will probably be overhauled if more space is needed. It is a tiny 1u rackmount, so it has four removable disks in the front and has no room for any more.