Old 05-31-2006, 09:08 PM   #1
LQ Newbie
Registered: May 2003
Location: Perth, Australia
Distribution: Slackware, Ubuntu
Posts: 18

Rep: Reputation: 0
Scalable, cheap disk storage? (Hypothetical, interested in ideas)

At the moment this is completely hypothetical and I'm just interested in your ideas and feedback, and also in expanding my knowledge of the subject:

I'm wondering how you would go about creating a scalable and cheap (i.e., cheaper than a SAN or a massive SCSI array) array of SATA disks. Currently I have a box at home with 7 x 320GB SATA disks arranged in a RAID-5 array, giving about 1.9TB formatted. However, resizing an array of this size is headache-inducing to say the least (thank god for UPSes!)
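For what it's worth, the resize headache can be scripted if the array is Linux software RAID. This is only a hypothetical sketch; the device names (/dev/md0, /dev/sdh) are examples, and you'd want a backup before reshaping:

```shell
# Hypothetical sketch: growing an existing md RAID-5 (/dev/md0) by one disk.
# Device names are examples only. Back up first -- an interrupted reshape
# without the backup file can lose the array.

# Add the new disk as a spare
mdadm /dev/md0 --add /dev/sdh

# Reshape the array to use 8 devices instead of 7
mdadm --grow /dev/md0 --raid-devices=8 --backup-file=/root/md0-grow.bak

# Watch reshape progress
cat /proc/mdstat

# Once finished, grow the filesystem to fill the new space (ext3 example)
resize2fs /dev/md0
```

The reshape runs in the background, so the array stays usable while it works, though a power loss mid-reshape is exactly where the UPS earns its keep.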

I'm interested in a solution that involves several machines and is scalable, so I can add more nodes at a later date: some sort of storage cluster, as it were. It doesn't necessarily have to be one big disk array internally, but from the outside (Samba access) it should appear as one "disk/folder". My thought is to have several "slave nodes", much like my current system, each with a large-capacity RAID array, and then a "master node" that merges all the arrays into one logical "drive", using some sort of filesystem (such as MogileFS) to store the files while using Samba to serve them up across the network.
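The "one folder from the outside" part is the easy bit: whatever pools the storage, the master node just exports the mount point over Samba. A minimal share definition might look like the following (the path and group name are made up for illustration):

```
[storage]
   path = /srv/storage
   browseable = yes
   read only = no
   guest ok = no
   valid users = @storage
```

The hard part is what sits behind that path: clients neither know nor care whether it's one local array or several nodes merged together.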

What are your ideas? If I need to clarify my "requirements", let me know.

Old 06-01-2006, 02:36 PM   #2
LQ Guru
Registered: Oct 2005
Location: Willoughby, Ohio
Distribution: linuxdebian
Posts: 7,232
Blog Entries: 5

Rep: Reputation: 190Reputation: 190
I would use a hardware RAID controller that allows hot swapping and array expansion on the fly, such as the 3ware controllers. Add a drive, issue the expand-array command, walk away... job done.

Otherwise, I suppose a NAS solution using something like NASLite or FreeNAS would work with multiple machines.
Old 06-08-2006, 02:12 PM   #3
Registered: Nov 2005
Location: New Jersey, USA
Distribution: SuSE
Posts: 492

Rep: Reputation: 31
Aside from the jealousy over 1.9TB...

Personally, the idea of having something that appears to be one "folder" at 1.9TB scares the sh** out of me. It seems like a single point of too much failure, and too much potential for misuse if it's anything other than a one-user system.

IMHO, anything that combines that much space into one logical "place" is just adding unneeded code, lag time, and complexity. If they're 320GB drives, why not just partition them the old-fashioned way, mount them to folders with meaningful names, and call it a day?
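As a sketch of that suggestion: with separate drives you'd get one device each (hda, hdb, hdc...), and the fstab might look like this (device names, filesystems, and mount points are illustrative only):

```
# /etc/fstab -- one mount per disk, named by what it holds
/dev/hda1   /srv/music     ext3   defaults   0 2
/dev/hdb1   /srv/movies    ext3   defaults   0 2
/dev/hdc1   /srv/backups   ext3   defaults   0 2
```

The trade-off is exactly the one debated in this thread: each disk fails (and fills up) independently, but you give up the single namespace and any striping or redundancy.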

On the useful side of the advice: if you're storing that much data, you need a strong, reliable, fault-tolerant filesystem. I would recommend against ext2 or any other Linux default. Even the journalling ones aren't the best out there in terms of crash-worthiness.

I'd suggest OpenSolaris or one of the BSD systems (FreeBSD or NetBSD).

There's also a distro called FreeNAS, which is a NAS package based on FreeBSD and has a strong filesystem.
Old 06-08-2006, 10:00 PM   #4
Registered: Aug 2004
Distribution: LFS
Posts: 350

Rep: Reputation: 30
Sounds like LVM2 is what you're looking for. It features scalable logical volumes, resizable partitions, snapshot copies, bad-block relocation, partition table types that support >2TB partitions, most of the modern filesystems, and a lot more. It works well with HA clustering and as an iSCSI target.
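To make that concrete, here's a hypothetical sketch of pooling two RAID arrays into one logical volume and later growing it. The device names (/dev/md0 etc.), volume group name, and mount point are all examples:

```shell
# Hypothetical sketch: pooling two RAID arrays into one LVM2 volume.
# All device names and paths below are examples.

pvcreate /dev/md0 /dev/md1            # mark the arrays as physical volumes
vgcreate storage /dev/md0 /dev/md1    # pool them into one volume group
lvcreate -l 100%FREE -n data storage  # one big logical volume
mkfs.ext3 /dev/storage/data
mount /dev/storage/data /srv/storage

# Later, to scale: add another array and grow the volume
pvcreate /dev/md2
vgextend storage /dev/md2
lvextend -l +100%FREE /dev/storage/data
resize2fs /dev/storage/data           # grow the filesystem to match
```

Note this pools disks within one machine; spanning multiple machines would need the new arrays exported to the master (e.g. as iSCSI targets) before being added as physical volumes.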


Old 06-09-2006, 02:34 AM   #5
Registered: Apr 2002
Location: in a fallen world
Distribution: slackware by choice, others too :} ... android.
Posts: 23,041
Blog Entries: 11

Rep: Reputation: 907Reputation: 907Reputation: 907Reputation: 907Reputation: 907Reputation: 907Reputation: 907Reputation: 907
Originally Posted by jantman
IMHO, anything that combines that much space into one logical "place" is just adding unneeded code, lag time, and complexity. If they're 320Gb drives, why not just have hda1, hda2, hda3, etc. in the old-fashioned way, mount them to folders with meaningful names, and call it a day?
Because those won't give him a) the speed and b) the redundancy
that a RAID-5 system does. Striping the data over several disks
IMPROVES performance, it doesn't degrade it (as a rule of
thumb, anyway. Happy to discuss the difference it makes to
e.g. an Oracle installation to have more disks laid out differently,
but I wouldn't want to miss out on redundancy either way.)

Another good reason for wanting such a large space is e.g.
movie remastering on a larger scale :}


