Older PCs with 1TB drive want to combine all space
I have 6 Dell OptiPlex 3010 PCs that are no longer being used. Each PC has 8GB of RAM and a 1TB drive, and I would like to put these to use. I was thinking about somehow making it so I could combine all the storage from the (6) 1TB drives. I don't know much about software-based storage like Ceph; is that even possible? Are there better uses for these PCs?
Well, you can run Linux individually on all of those, but that doesn't sound like your main intention; it sounds instead like you want to make a NAS. I'm just not sure whether it wouldn't be better to consolidate some of those drives onto fewer PCs and have, say, 3 drives per PC. Make two NAS systems and have one be a mirror of the other. I'm not sure if there's existing software out there you can lean on for this.
The other traditional stuff would be to put them all online for things like distributed computing projects, SETI, etc. These days I'd be worried about them being used for nefarious reasons and that I couldn't monitor them well enough to avoid that.
Wow, I wish I had the "problem" of figuring out what to do with that stuff! I'm guessing that these are compact PCs so combining multiple drives into a PC isn't so much an option.
Do you have a gigabit switch to connect them all together nice and fast? I'm going to assume so.
There are a number of approaches I'd be considering.
One thing that I've done before, but is rather messy, is to combine the drives using aufs (or unionfs). Basically, this is a file system that mashes together multiple file systems into the same directory tree. Where folders overlap, the contents appear to be combined. Sounds great in theory. In practice, I found it extremely frustrating trying to get files to go to the component volume I wanted. The good news is that a failed component still leaves all of the others readable. But the bad news is that this leaves behind bizarre gaps of files/folders that just went missing, and you'll probably not have much luck figuring out what they were.
A more sophisticated possibility I'd consider would be to combine software RAID (mdadm) with network block devices (nbd). I have not tried this before, but I think it could work. Basically, nbd lets you access a hard drive remotely as if it were a local hard drive. Then, software RAID could be used to create a RAID array over them...perhaps a RAID10 array for redundancy (giving a total of 3TB of space).
Each computer would have a small OS partition (perhaps 10GB in size) and a large data partition taking up the rest of the space. Five computers share those partitions via nbd to the main server. The main server accepts those nbd devices and has software RAID set up to combine them with the local partition into a RAID array. At that point, it can be shared via nfs (or samba) to the network.
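The nbd + mdadm setup described above could be sketched roughly like this. It is untested; the hostnames (pc2..pc6), the export name, and the partition layout are all assumptions, and every command needs root:

```shell
# On each of the five client PCs: export the data partition over nbd.
# /etc/nbd-server/config (config fragment, assumed layout):
#
#   [generic]
#   [data]
#       exportname = /dev/sda2
#
# then start the export:
#   systemctl start nbd-server

# On the main server: attach each remote partition as a local block device...
nbd-client -N data pc2 /dev/nbd0
nbd-client -N data pc3 /dev/nbd1
nbd-client -N data pc4 /dev/nbd2
nbd-client -N data pc5 /dev/nbd3
nbd-client -N data pc6 /dev/nbd4

# ...then build a RAID10 array from them plus the server's own data partition.
mdadm --create /dev/md0 --level=10 --raid-devices=6 \
    /dev/sda2 /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4
mkfs.ext4 /dev/md0
```

RAID10 over six 1TB members gives 3TB usable, which matches the figure above; the mirroring also gives the array some chance of surviving a single machine dropping off the network.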
Will this work? I'm honestly not sure. But it's what I'd be trying to do.
Also, I'd be looking at all that RAM and I'd be trying to figure out how to use that RAM like a ridiculously fast SSD. The OS of each server should really only take up a few hundred MB of RAM, leaving well over 7GB of RAM for a freaking fast tmpfs ramdisk. Each one of those is enough for an entire OS drive, for client machines to run at maxed out gigabit speeds. Or maybe combine them via nbd and software RAID into a 44GB stupidly fast SSD...
The simplest solution is probably to "harvest" the useful parts and combine them into a better machine. Most standard desktops have the ports necessary to use the 6 hard drives and 4 of the RAM sticks (you're more likely to have the necessary ports with a full ATX motherboard instead of a micro-ATX motherboard). If you are able to buy or find an extra hard drive to install the OS to, using the additional storage as one drive is very easy. You can just use mdadm to make a RAID-5 or RAID-6 storage array and mount it wherever you like. Look at the following link, change the "level" argument to 5 or 6, and point mdadm at all 6 hard drives. Honestly, the hardest part is probably putting a new partition on each of the 6 drives.
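The mdadm steps for that might look something like the sketch below. The device names /dev/sdb through /dev/sdg are assumptions (check lsblk on your machine first), and everything here needs root:

```shell
# Assumes the OS lives on /dev/sda and the six harvested disks are sdb..sdg,
# each with a single partition already created (fdisk/parted).
mdadm --create /dev/md0 --level=5 --raid-devices=6 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1

# Put a filesystem on the array and mount it wherever you like:
mkfs.ext4 /dev/md0
mkdir -p /mnt/storage
mount /dev/md0 /mnt/storage

# Make it persist across reboots:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
echo '/dev/md0 /mnt/storage ext4 defaults 0 2' >> /etc/fstab
```

With --level=5 you get roughly 5TB usable and survive one disk failure; --level=6 drops that to 4TB but survives two.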
Next up, you could mount a data partition from each of the drives over the network. You can do that via NFS, but it's probably even easier to just use the sshfs command. Since you have permission to write in your HOME, you could create a bunch of mount-point folders there, like ~/computer1, ~/computer2, etc. Set up ssh keys (see link below) to avoid needing passwords. Install a server OS on each machine, make sure you include an OpenSSH server during the install, and keep the user name the same across the machines. Then the command to mount would simply be sshfs computer1:/path/to/a/folder/you/have/permissions/in ~/computer1, and this could be put into a startup script to automatically mount everything.
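The sshfs workflow could look roughly like this; "computer1" and the remote path are placeholders for your own hostname and directory:

```shell
# One-time setup: create a key and copy it to each machine
# (same username everywhere, so no password prompts later).
ssh-keygen -t ed25519
ssh-copy-id computer1

# Mount a remote folder into your home directory:
mkdir -p ~/computer1
sshfs computer1:/path/to/a/folder/you/have/permissions/in ~/computer1

# Unmount when done:
fusermount -u ~/computer1
```

Repeat the mkdir/sshfs pair for computer2 through computer6 in a startup script and all six data partitions appear under your HOME.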
Quote:
I have 6 Dell OptiPlex 3010 PCs that are no longer being used. Each PC has 8GB of RAM and 1TB drive [...] Are there better uses for these PCs?
Okay, I've come up with a better use. Or at least, what I would do with them. I would combine the resources of 2 or maybe 3 PCs to make a nice fast system. If three drives fit in each, then a pair of them could be used. A simple software RAID0 for 3TB of storage in each; use rsync to make one a backup for the other.
But the fun part is utilizing 8GB of RAM on one to make a freaking fast pseudo-SSD for the other. I have a blog post explaining how to do nfs root without the extra complication of tftp and PXE booting. I'd take that and adapt it to make the root drive on the client be an nfs share of tmpfs on the other.
So basically, nearly all of the 8GB of one computer is available as a freaking fast SSD for the other to netboot off of. The other computer has all of its 8GB free for normal RAM usage. Both have a pretty zippy RAID0 array of 3TB for storage, with rsync used to periodically sync up the contents.
I just played around with the necessary /etc/exports settings for sharing tmpfs via nfs for network booting:
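An /etc/exports line for this might look something like the following config fragment. The subnet and option list are assumptions on my part; no_root_squash is typically needed when the share is serving as a root filesystem:

```
/srv/nfsroot 192.168.1.0/24(rw,async,no_root_squash,no_subtree_check)
```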
and the /etc/fstab settings for a tmpfs with bigger than the default 50% size limit:
Code:
none /srv/nfsroot tmpfs size=95% 0 0
Beyond that, I just need a script to automatically copy from a hard drive backup to /srv/nfsroot (using cp -vax), and an rsync script to sync back changes to the backup (using rsync -vaxAX).
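Those two scripts can be sketched with placeholder directories. Here mktemp stands in for the real backup drive and for the tmpfs at /srv/nfsroot, and the -v flags are dropped to keep the output quiet:

```shell
#!/bin/sh
# Stand-ins for the real locations (assumptions for illustration only):
BACKUP=$(mktemp -d)   # would be the hard-drive backup of the OS
NFSROOT=$(mktemp -d)  # would be the tmpfs mounted at /srv/nfsroot

echo "original" > "$BACKUP/etc-hostname"

# Populate the ramdisk from the backup: -a = archive mode, -x = one filesystem
cp -ax "$BACKUP/." "$NFSROOT/"

# Simulate the client changing a file, then sync changes back to the backup,
# preserving ACLs (-A) and extended attributes (-X):
echo "changed" > "$NFSROOT/etc-hostname"
rsync -axAX "$NFSROOT/" "$BACKUP/"

cat "$BACKUP/etc-hostname"   # prints "changed"
```

In the real setup the cp line runs once at server boot, and the rsync line runs periodically from cron.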
The bottom line? It takes a while for the server to be completely ready, because of the time required to populate /srv/nfsroot. But after that, the client workstation can boot up in a flash, and its OS drive is essentially a crazy fast SSD.
You also get a pretty fast 3TB RAID0 for storage of larger files, backed up to another 3TB RAID0. In addition, it's possible to set up the other computers as diskless PXE boot workstations. A diskless workstation in another room can be pretty quiet.
Hmh ... two RAID 0 arrays. If one drive fails on each you are doomed. OTOH, something like RAIDZ2 would stay intact if any two drives failed. Plus extra bitrot protection.
The redundancy of two RAID0 arrays is that one is an rsync backup of the other. I think that keeping the arrays local to a single computer each is worth more than whatever "fun" you'll have with a RAIDZ2 array when half the drives go down due to network disconnect or one computer going down.
How "freaking fast" is your network, including latency, and compared to an actual SSD?
A good SSD will be a bit faster - the 125MB/sec limit of gigabit ethernet/nfs (1000Mbit/s divided by 8) is lower than the 300MB/sec of the computer's SATA interface. But here's the thing - a good SSD costs a lot more than $0!
Add in the buffering and protocol overhead, and you might be faster than a rotating disk (courtesy of the lack of seek time), but by no means will it be like a "freaking fast SSD".
I said "freaking fast pseudo-SSD".
Anyway, I use nfsroot over gigabit all the time, and it is far and away faster than a local rotating disk. In real life usage, it feels as fast as a local SSD.
I have also done RAMboot - which is to say I customized the initrd to completely copy the entire OS onto a tmpfs drive. And while benchmarks would be orders of magnitude faster than an SSD, it honestly doesn't feel significantly faster. It just means practically everything is CPU limited. My main reason for using RAMboot was simply cost. That's the same reason I use nfs root to share an SSD among multiple computers also.
The bottom line is that it's freaking fast compared to using a rotating disk, at gigabit speeds.
Speaking of which, RAMboot is another thing that could be done with those computers. Out-of-box, a Debian XFCE4 desktop install will fit on a 4GB thumbdrive, so loading it up entirely into RAM, uncompressed, will still leave 4+GB of RAM for normal use.
But like I said, it doesn't feel all that much faster than a gigabit nfs share in practice. With the way stuff gets cached into RAM and writeback is delayed, the performance edge of RAMboot just doesn't make much of a practical difference.
That way, one computer can serve up its 8GB (plus swap on hard drive) as the OS partition for another computer, which can use its 8GB for normal RAM use.