Old 09-14-2016, 12:24 PM   #1
bkone
Member
 
Registered: Jun 2006
Distribution: SUSE, Red Hat, Oracle Linux, CentOS
Posts: 108

Rep: Reputation: 15
Older PCs with 1TB drives - want to combine all the space


I have 6 Dell OptiPlex 3010 PCs that are no longer being used. Each PC has 8GB of RAM and a 1TB drive, and I would like to put them to use. I was thinking about somehow combining all the storage from the six 1TB drives. I don't know much about software-based storage like Ceph - is that even possible? Are there better uses for these PCs?
 
Old 09-14-2016, 01:46 PM   #2
rtmistler
Moderator
 
Registered: Mar 2011
Location: USA
Distribution: MINT Debian, Angstrom, SUSE, Ubuntu, Debian
Posts: 9,882
Blog Entries: 13

Rep: Reputation: 4930
Well, you can run Linux individually on all of those, but that doesn't sound like your real intention; it sounds like you want to make a NAS. I'm just not sure it wouldn't be better to consolidate the drives onto fewer PCs, say three drives per PC: make two NAS systems and have one be a mirror of the other. I'm not sure whether there's existing software out there you could lean on for this.

The other traditional use would be to put them all online for distributed computing projects, SETI and the like. These days I'd be worried about them being used for nefarious purposes, and about not being able to monitor them well enough to prevent that.
 
Old 09-14-2016, 02:24 PM   #3
IsaacKuo
Senior Member
 
Registered: Apr 2004
Location: Baton Rouge, Louisiana, USA
Distribution: Debian Stable
Posts: 2,546
Blog Entries: 8

Rep: Reputation: 465
Wow, I wish I had the "problem" of figuring out what to do with that stuff! I'm guessing these are compact PCs, so combining multiple drives into one PC isn't much of an option.

Do you have a gigabit switch to connect them all together nice and fast? I'm going to assume so.

There are a number of approaches I'd be considering.

One thing that I've done before, though it's rather messy, is to combine the drives using aufs (or unionfs). Basically, this is a file system that merges multiple file systems into the same directory tree. Where folders overlap, their contents appear combined. It sounds great in theory. In practice, I found it extremely frustrating trying to get files to land on the component volume I wanted. The good news is that a failed component still leaves all of the others readable. The bad news is that a failure leaves behind bizarre gaps of missing files and folders, and you'll probably not have much luck figuring out what they were.
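For illustration, a minimal union mount with aufs might look like this (the mount points are hypothetical, and branch syntax varies between aufs versions):
Code:
# merge two data volumes into one directory tree;
# new files land on the first rw branch
mount -t aufs -o br=/mnt/disk1=rw:/mnt/disk2=rw none /mnt/combined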

A more sophisticated possibility I'd consider would be to combine software RAID (mdadm) with network block devices (nbd). I have not tried this before, but I think it could work. Basically, nbd lets you access a remote hard drive as if it were a local one. Software RAID could then be used to create an array over those devices - perhaps RAID10 for redundancy, giving 3TB of usable space from the six 1TB drives.

Each computer would have a small OS partition (perhaps 10GB) and a large data partition taking up the rest of the space. Five of the computers share their data partitions via nbd with the main server. The main server attaches those nbd devices and uses software RAID to combine them with its local partition into a RAID array. At that point, the array can be shared via nfs (or samba) to the network.
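A rough sketch of the idea, assuming the data partition is /dev/sda2 everywhere and the exporting machines sit at 192.168.1.11 through .15 (all hypothetical; exact nbd syntax varies with the version):
Code:
# /etc/nbd-server/config on each of the five exporting machines
[generic]
[data]
    exportname = /dev/sda2
Code:
# on the main server: attach the five remote exports
modprobe nbd
nbd-client -N data 192.168.1.11 /dev/nbd0
nbd-client -N data 192.168.1.12 /dev/nbd1
nbd-client -N data 192.168.1.13 /dev/nbd2
nbd-client -N data 192.168.1.14 /dev/nbd3
nbd-client -N data 192.168.1.15 /dev/nbd4
# combine the local partition and the five network devices into RAID10
mdadm --create /dev/md0 --level=10 --raid-devices=6 \
    /dev/sda2 /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4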

Will this work? I'm honestly not sure. But it's what I'd be trying to do.

Also, I'd be looking at all that RAM and trying to figure out how to use it like a ridiculously fast SSD. The OS on each server should really only take up a few hundred MB of RAM, leaving well over 7GB for a freaking fast tmpfs ramdisk. Each one of those is enough for an entire OS drive, letting client machines run at maxed-out gigabit speeds. Or maybe combine them via nbd and software RAID into a ~44GB stupidly fast pseudo-SSD...
 
Old 09-14-2016, 04:08 PM   #4
jefro
Moderator
 
Registered: Mar 2008
Posts: 21,982

Rep: Reputation: 3625
Energy-wise, it will be expensive to run six computers just to combine the hard drives.

A storage enclosure might be cheaper: put two drives in one computer and four in the box.

It might be better, energy-wise, just to get an ARM-based NAS box.

It won't be easy to use all six in terms of computing ability. I wish it were easy.
 
Old 09-14-2016, 04:22 PM   #5
springshades
Member
 
Registered: Nov 2004
Location: Near Lansing, MI , USA
Distribution: Mainly just Mandriva these days.
Posts: 317

Rep: Reputation: 30
The simplest solution is probably to "harvest" the useful parts and combine them into one better machine. Most standard desktops have the ports necessary to use the six hard drives and four of the RAM sticks (you're more likely to have enough ports on a full-ATX motherboard than on a micro-ATX one). If you can buy or find an extra hard drive to install the OS to, using the additional storage as one drive is very easy: just use mdadm to make a RAID-5 or RAID-6 array and mount it wherever you like. Look at the following link, change the "level" argument to 5 or 6, and point mdadm at all six hard drives; there's a sketch after the link. Honestly, the hardest part is probably putting a new partition on each of the six drives.

http://www.ducea.com/2009/03/08/mdadm-cheat-sheet/
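Something along these lines, assuming the six data partitions are /dev/sdb1 through /dev/sdg1 (hypothetical device names - adjust to taste):
Code:
# RAID-6 survives any two drive failures; use --level=5 for RAID-5
mdadm --create /dev/md0 --level=6 --raid-devices=6 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/storage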

Next up, you could mount a data partition from each of the drives over the network. You can do that via NFS, but it's probably even easier to just use sshfs. Since you have permission to write in your HOME, you could create a bunch of mount-point folders there, like ~/computer1, ~/computer2, etc. Set up ssh keys (see the link below, with a combined sketch after it) to avoid needing passwords. Install a server OS on each machine, make sure you include an OpenSSH server during the install, and keep the user name the same across the machines. Then the mount command is simply sshfs computer1:/path/to/a/folder/you/have/permissions/in ~/computer1, and that can be put into a startup script to mount everything automatically.

The ssh keys link:
https://www.digitalocean.com/communi...up-ssh-keys--2
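Putting those steps together for one machine (the host name and remote path are hypothetical):
Code:
# one-time: generate a key and copy it to the remote machine
ssh-keygen -t ed25519
ssh-copy-id computer1
# create the mount point and mount the remote folder
mkdir -p ~/computer1
sshfs computer1:/home/user/data ~/computer1
# unmount when finished
fusermount -u ~/computer1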

The most powerful and most complicated option is to set up the machines as a cluster. Rocks is designed for that.

http://www.rocksclusters.org/rocks-d...g-started.html
 
Old 09-14-2016, 04:33 PM   #6
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: Rocky Linux
Posts: 4,779

Rep: Reputation: 2212
Other possibilities might be iSCSI or AoE (ATA over Ethernet), but running a whole PC just to support each disk drive is still energy-expensive.

A storage enclosure to hold 6 drives isn't going to be any cheaper than a new 6TB drive, but of course you can't do RAID with a single drive.
 
Old 09-14-2016, 05:44 PM   #7
IsaacKuo
Senior Member
 
Registered: Apr 2004
Location: Baton Rouge, Louisiana, USA
Distribution: Debian Stable
Posts: 2,546
Blog Entries: 8

Rep: Reputation: 465
Quote:
Originally Posted by bkone View Post
I have 6 Dell OptiPlex 3010 PCs that are no longer being used. Each PC has 8GB of RAM and 1TB drive [...] Are there better uses for these PCs?
Okay, I've come up with a better use - or at least, what I would do with them. I would combine the resources of two or maybe three PCs to make one nice, fast system. If three drives fit in each, then a pair of them could be used: a simple software RAID0 giving 3TB of storage in each, with rsync making one a backup of the other (see the sketch below).
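A sketch of that layout (device names and the backup host are hypothetical):
Code:
# on each machine: stripe three 1TB drives into a 3TB RAID0
mdadm --create /dev/md0 --level=0 --raid-devices=3 \
    /dev/sda1 /dev/sdb1 /dev/sdc1
# on the primary: periodically mirror the array to the other machine
rsync -aHAX --delete /mnt/raid0/ backup-pc:/mnt/raid0/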

But the fun part is using the 8GB of RAM in one to make a freaking fast pseudo-SSD for the other. I have a blog post explaining how to do nfs root without the extra complication of tftp and PXE booting. I'd take that and adapt it so that the client's root drive is an nfs share of a tmpfs on the other machine.

So basically, nearly all of the 8GB of one computer is available as a freaking fast SSD for the other to netboot off of. The other computer has all of its 8GB free for normal RAM usage. Both have a pretty zippy RAID0 array of 3TB for storage, with rsync used to periodically sync up the contents.

I just played around with the necessary /etc/exports settings for sharing a tmpfs via nfs for network booting:
Code:
/srv/nfsroot/ 192.168.1.37(rw,fsid=1,sync,no_root_squash)
and the /etc/fstab settings for a tmpfs larger than the default 50% size limit:
Code:
none /srv/nfsroot tmpfs size=95% 0 0
Beyond that, I just need a script to automatically copy from a hard drive backup to /srv/nfsroot (using cp -vax), and an rsync script to sync changes back to the backup (using rsync -vaxAX) - both sketched below.
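A minimal sketch of those two scripts (the backup location /mnt/backup/nfsroot is hypothetical):
Code:
#!/bin/sh
# populate.sh - fill the tmpfs from the on-disk backup after boot
cp -vax /mnt/backup/nfsroot/. /srv/nfsroot/
Code:
#!/bin/sh
# syncback.sh - push tmpfs changes back to the on-disk backup
rsync -vaxAX /srv/nfsroot/ /mnt/backup/nfsroot/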

The bottom line? It takes a while for the server to be completely ready, because of the time required to populate /srv/nfsroot. But after that, the client workstation can boot up in a flash, and its OS drive is essentially a crazy fast SSD.

You also get a pretty fast 3TB RAID0 for storage of larger files, backed up to another 3TB RAID0. In addition, it's possible to set up the other computers as diskless PXE boot workstations. A diskless workstation in another room can be pretty quiet.
 
Old 09-14-2016, 06:13 PM   #8
Emerson
LQ Sage
 
Registered: Nov 2004
Location: Saint Amant, Acadiana
Distribution: Gentoo ~amd64
Posts: 7,661

Rep: Reputation: Disabled
Hmh ... two RAID 0 arrays. If one drive fails in each, you are doomed. OTOH, something like RAIDZ2 would stay intact if any two drives failed. Plus you get extra bitrot protection.
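For reference, a RAIDZ2 pool across all six drives - assuming they were all visible to one machine, and with hypothetical device names - would look like:
Code:
# 6x 1TB in RAIDZ2: roughly 4TB usable, survives any two drive failures
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg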
 
Old 09-14-2016, 06:26 PM   #9
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: Rocky Linux
Posts: 4,779

Rep: Reputation: 2212
Quote:
Originally Posted by IsaacKuo View Post
But the fun part is utilizing 8GB of RAM on one to make a freaking fast pseudo-SSD for the other.
How "freaking fast" is your network, including latency, and compared to an actual SSD?
 
Old 09-14-2016, 07:26 PM   #10
IsaacKuo
Senior Member
 
Registered: Apr 2004
Location: Baton Rouge, Louisiana, USA
Distribution: Debian Stable
Posts: 2,546
Blog Entries: 8

Rep: Reputation: 465
The redundancy of the two RAID0 arrays is that one is an rsync backup of the other. I think keeping each array local to a single computer is worth more than whatever "fun" you'll have with a RAIDZ2 array when half its drives go down because of a network disconnect or one computer going down.
 
Old 09-14-2016, 07:27 PM   #11
IsaacKuo
Senior Member
 
Registered: Apr 2004
Location: Baton Rouge, Louisiana, USA
Distribution: Debian Stable
Posts: 2,546
Blog Entries: 8

Rep: Reputation: 465
Quote:
Originally Posted by rknichols View Post
How "freaking fast" is your network, including latency, and compared to an actual SSD?
A good SSD will be a bit faster - the ~125MB/sec limit of gigabit Ethernet/nfs is lower than the 300MB/sec of the computer's SATA interface. But here's the thing - a good SSD costs a lot more than $0!
 
Old 09-14-2016, 07:52 PM   #12
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: Rocky Linux
Posts: 4,779

Rep: Reputation: 2212
Quote:
Originally Posted by IsaacKuo View Post
A good SSD will be a bit faster - the 125MB/sec limit of ethernet/nfs is lower than the 300MB/sec of the computer's SATA interface. But here's the thing - a good SSD costs a lot more than $0!
Add in the buffering and protocol overhead, and you might be faster than a rotating disk (courtesy of the lack of seek time), but by no means will it be like a "freaking fast SSD".
 
Old 09-14-2016, 08:05 PM   #13
IsaacKuo
Senior Member
 
Registered: Apr 2004
Location: Baton Rouge, Louisiana, USA
Distribution: Debian Stable
Posts: 2,546
Blog Entries: 8

Rep: Reputation: 465
Quote:
Originally Posted by rknichols View Post
Add in the buffering and protocol overhead, and you might be faster than a rotating disk (courtesy of the lack of seek time), but by no means will it be like a "freaking fast SSD".
I said "freaking fast pseudo-SSD".

Anyway, I use nfsroot over gigabit all the time, and it is far and away faster than a local rotating disk. In real life usage, it feels as fast as a local SSD.

I have also done RAMboot - which is to say, I customized the initrd to copy the entire OS onto a tmpfs drive. And while benchmarks would show it orders of magnitude faster than an SSD, it honestly doesn't feel significantly faster; it just means practically everything is CPU-limited. My main reason for using RAMboot was simply cost. That's also the reason I use nfs root to share an SSD among multiple computers.

The bottom line is that it's freaking fast compared to using a rotating disk, at gigabit speeds.
 
Old 09-14-2016, 08:11 PM   #14
IsaacKuo
Senior Member
 
Registered: Apr 2004
Location: Baton Rouge, Louisiana, USA
Distribution: Debian Stable
Posts: 2,546
Blog Entries: 8

Rep: Reputation: 465
Speaking of which, RAMboot is another thing that could be done with those computers. Out of the box, a Debian XFCE4 desktop install fits on a 4GB thumbdrive, so loading it entirely into RAM, uncompressed, still leaves 4+GB of RAM for normal use.
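As a rough illustration of the mechanism - this is an assumption about how one might wire it up with Debian's initramfs-tools, not the exact script I use - an init-bottom hook can copy the freshly mounted root into a tmpfs and swap it into place before init runs:
Code:
#!/bin/sh
# hypothetical /etc/initramfs-tools/scripts/init-bottom/ramboot
# (run update-initramfs -u after installing it)
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in prereqs) prereqs; exit 0;; esac

mkdir /ram
mount -t tmpfs -o size=90% tmpfs /ram
cp -ax ${rootmnt}/. /ram/     # copy the whole OS into RAM
umount ${rootmnt}
mount --move /ram ${rootmnt}  # hand the tmpfs over as the real root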

But like I said, in practice it doesn't feel all that much faster than a gigabit nfs share. With the way files get cached into RAM and writeback is delayed anyway, the performance edge of RAMboot just doesn't make much of a practical difference.
 
Old 09-23-2016, 10:46 AM   #15
IsaacKuo
Senior Member
 
Registered: Apr 2004
Location: Baton Rouge, Louisiana, USA
Distribution: Debian Stable
Posts: 2,546
Blog Entries: 8

Rep: Reputation: 465
By the way, I put together a how-to on tmpfs nfsroot here:

http://www.linuxquestions.org/questi...omputer-37167/

That way, one computer can serve up its 8GB (plus swap on hard drive) as the OS partition for another computer, which can use its 8GB for normal RAM use.
 
  

