ROCK — This forum is for the discussion of ROCK Linux.
12-18-2014, 04:03 PM | #1 | LQ Newbie | Registered: Dec 2014 | Posts: 8
How to use compute-nodes as NAS?
Does anyone have a clue how to use all compute nodes as a NAS with some kind of virtual "shared" disk? I saw that the menu (after insert-ethers) has a NAS Appliance option. Does anyone use that?
Also, I have heard that Hadoop is a great candidate for Rocks. Has anyone played with it?
Lastly, Lustre also looks interesting as a shared filesystem.
Any helpful information would be appreciated.
12-20-2014, 09:49 PM | #2 | Senior Member | Registered: May 2012 | Location: Sebastopol, CA | Distribution: Slackware64 | Posts: 1,038
I like using GlusterFS for Linux NAS. It's free, trivial to get working on Linux, performs better than NFS, and allows for very flexible distributed data storage.
http://www.gluster.org/
No need for special support in the kernel or dedicated filesystems. Just create subdirectories anywhere you want data to be stored, tell gluster those subdirectories are "bricks", turn one or more bricks into a "volume", and use CIFS to mount the volume on all of your compute nodes.
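A minimal sketch of those steps from the command line might look like this (the host names node01/node02 and the brick path are placeholders, and the syntax assumes a GlusterFS 3.x install):

# on every storage node: make a directory to use as a brick
mkdir -p /data/brick1

# from any one node: pool the peers and build a volume from the bricks
# (gluster may ask you to append "force" if a brick lives on the root partition)
gluster peer probe node02
gluster volume create gv0 node01:/data/brick1 node02:/data/brick1
gluster volume start gv0

# on each compute node: mount the volume; the native FUSE client is shown here,
# CIFS (via Samba) or NFS are the other common options
mount -t glusterfs node01:/gv0 /mnt/gv0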
12-21-2014, 10:41 PM | #3 | LQ Newbie (Original Poster) | Registered: Dec 2014 | Posts: 8
Thank you!
How does it use each node's hard drive?
12-22-2014, 01:58 PM | #4 | Senior Member | Registered: May 2012 | Location: Sebastopol, CA | Distribution: Slackware64 | Posts: 1,038
It depends on the kind of volume you make from the bricks. The upcoming release will support Reed-Solomon, but in the meantime it is limited to "mirrors" (like RAID1), "stripes" (like RAID0), and "stripes of mirrors" (like RAID10).
So if you have four compute nodes: A, B, C, and D, and on each of them you mkdir /data and tell gluster to make /data a "brick", you will have four data bricks: A-/data, B-/data, C-/data, and D-/data.
If you create a stripe volume "/space1" from all four, then some data will be written to A-/data, other data to B-/data, other data to C-/data, and yet other data to D-/data. Thus, if you write 100MB to file /space1/foo, then about 25MB of "foo" will be written to each brick. Reading "foo" will read different data from each brick concurrently. If you lose one brick, though, you will lose the entire volume.
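As a sketch, creating such a striped volume with the GlusterFS 3.x CLI (where the stripe translator is still available) looks roughly like this:

gluster volume create space1 stripe 4 \
    nodeA:/data nodeB:/data nodeC:/data nodeD:/data
gluster volume start space1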
If you create a mirror volume "/space2" from all four bricks, then the same data will be written to all four bricks. This makes for slow writing, but fast reading, and you can lose any number of bricks and still not lose your volume, as long as you have at least one.
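The four-way mirror is roughly the same command with "replica" instead of "stripe" (again, just a sketch):

gluster volume create space2 replica 4 \
    nodeA:/data nodeB:/data nodeC:/data nodeD:/data
gluster volume start space2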
If you create a stripe of mirrors volume "/space3", where (A-/data, B-/data) is one mirror and (C-/data, D-/data) is the other mirror, then some data will be written to A and B, while other data will be written to C and D. In this case, when you write 100MB to file /space3/foo, then 50MB of foo will be written to A-/data, the same 50MB will be written to B-/data, and the other 50MB of foo will be written to C-/data and D-/data. In this way reads and writes are fairly fast, and you can lose up to one of A or B, and one of C or D, without losing the entire volume.
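The stripe-of-mirrors layout combines both keywords; brick order determines which bricks get paired into mirrors, so verify the grouping with "gluster volume info" after creating it. A rough sketch:

gluster volume create space3 stripe 2 replica 2 \
    nodeA:/data nodeB:/data nodeC:/data nodeD:/data
gluster volume start space3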
Also, if new nodes come online and you want to create bricks on them and add them to your volume, GlusterFS allows for this. If, for instance, you have a 21-brick volume organized as seven three-brick mirrors in a seven-mirror stripe, you could add another three-brick mirror to make an eight-mirror stripe, and then tell GlusterFS to "rebalance" the stripe (which may impact performance, but does not require the volume to be taken offline). In this way you can increase the amount of space available in the volume over time.
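For that 21-brick example, the grow-and-rebalance step is roughly as follows (the volume name "bigvol" and the three new host names are placeholders, assuming a volume built from three-brick mirrors):

# add one more three-brick mirror to the volume...
gluster volume add-brick bigvol replica 3 \
    nodeV:/data nodeW:/data nodeX:/data

# ...then spread existing data across the enlarged volume and watch progress
gluster volume rebalance bigvol start
gluster volume rebalance bigvol status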
Each brick's data is stored in the directory it was made from (/data, or /var/brick, or wherever you like).
The next release of GlusterFS will support Reed-Solomon redundancy, for RAID5-like and RAID6-like organizations of bricks into volumes. I haven't tried it yet, but look forward to doing so.
1 member found this post helpful.
12-22-2014, 02:06 PM | #5 | LQ Newbie (Original Poster) | Registered: Dec 2014 | Posts: 8
Wow, great. Thank you for the detailed explanation. I will play with that. One question: if a node goes offline (e.g. for maintenance), how will GlusterFS behave in a mixed configuration?
12-22-2014, 03:21 PM | #6 | Senior Member | Registered: May 2012 | Location: Sebastopol, CA | Distribution: Slackware64 | Posts: 1,038
When you take a node offline, if the volume has enough redundancy (via mirroring), the volume will continue to be usable. There is a command-line tool ("gluster") which can tell you which bricks are missing from a volume. Remote users accessing files on the volume can keep working as if nothing were wrong.
See this blog entry for an example of the process for taking a brick out of a mirror and putting another one in its place:
http://blog.angits.net/serendipity/a...-is-replicated
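Roughly, the status check and brick swap described there look like this with the gluster CLI (volume and host names are placeholders; syntax assumes GlusterFS 3.x):

# see which bricks are online or missing for a volume
gluster volume status space2

# check the self-heal state of a replicated volume
gluster volume heal space2 info

# swap a dead brick for a fresh one and let self-heal copy the data back
gluster volume replace-brick space2 nodeB:/data nodeE:/data commit force
gluster volume heal space2 full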
12-22-2014, 03:34 PM | #7 | Member | Registered: Aug 2012 | Location: Ontario, Canada | Distribution: Slackware 14.2, LFS-current, NetBSD 6.1.3, OpenIndiana | Posts: 319
There is also Ceph + ZFS.
Pros:
RAIDZ2 (see the ZFS sketch below)
Deduplication
Self-healing (no more bit rot)
Cons:
Needs a lot of RAM
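A minimal sketch of the ZFS side (the disk names sdb..sde are placeholders, and this assumes ZFS on Linux is already installed; setting up Ceph on top is a separate, larger topic):

# create a RAIDZ2 pool: any two disks can fail without data loss
zpool create tank raidz2 sdb sdc sdd sde

# deduplication is optional, and it is what eats the RAM
zfs set dedup=on tank

# scrub periodically so self-healing can repair silent corruption
zpool scrub tank
zpool status tank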
01-09-2015, 12:24 PM | #8 | LQ Newbie (Original Poster) | Registered: Dec 2014 | Posts: 8
I am going to play with BeeGFS (formerly FhGFS, from Fraunhofer). Has anyone used it with Rocks?