Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
I've got a cluster of 7 computers, and I have three related issues. I would like to share some common settings between them, but leave some of the settings local. For example, I would like to share /etc/hosts (a file) and /etc/ganglia/ (a folder) from a server to all the other machines. I thought of maybe using NFS, but I want to leave some files (/etc/modprobe and /etc/rc.conf, which are hardware dependent) local.
I would also like to have the same user settings on all machines, and to have the home folders the same. I could do NFS shares, but I'm not sure how that would affect my ssh keys... Thanks for any help!
In my case, we use some scripts that do scp to the needed nodes.
You can set up ssh keys so that the nodes do not ask for a password, then scp your config files to all the nodes with a simple 'for i in nodes' loop...
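A minimal sketch of that loop, assuming the node hostnames and file list below (both are made up; adjust for your cluster):

```shell
#!/bin/sh
# Push shared config files to every node with scp.
# Node names and the file list are assumptions -- edit for your setup.
NODES="node1 node2 node3 node4 node5 node6 node7"
FILES="/etc/hosts /etc/ganglia"

for node in $NODES; do
    # -r copies the /etc/ganglia directory recursively; with ssh keys
    # in place there is no password prompt.  Echoed here as a dry run;
    # remove the "echo" to actually copy.
    echo scp -r $FILES "root@$node:/etc/"
done
```

With key-based auth configured, this runs unattended, so it can go in cron as well.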
How about using links for those local settings files?
I mean: put your local settings somewhere in the local file-system and use symlinks to them.
Example:
ln -s /local/modprobe /etc/modprobe
As long as every node has the same path (/local/modprobe), I believe this will work when you mount /etc from NFS.
But I think mounting /etc from NFS is a bit risky. Why don't you synchronize /etc and skip all local setting files?
Yep, I think it's best to use links together with NFS. Your links will then point to the files on the NFS share. Alternatively, you may want to build a repository of the files that you want to synchronise, and then use something like rsync. If your cluster were using a clustered filesystem, you would not have this problem.
Last edited by chitambira; 04-20-2009 at 04:27 AM.
Linking all the NFS files is actually a pretty good idea; however, setting that up on 7 nodes would take some time. I like the idea of using scp, but I was wondering if rsync would be better (I know rsync can use ssh). I've used rsync by itself, but I also see it has a daemon mode. How does the daemon mode work? Anybody know a good reference or tutorial?
Also, any ideas on sharing users? I've heard the words LDAP authentication before, but I don't really know if this is what I'm looking for.
In Debian you can install rsync with apt, then create the config file /etc/rsyncd.conf (it's a simple way).
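The config contents didn't survive in this thread; a minimal sketch of a simple daemon-mode /etc/rsyncd.conf (the module name and path below are assumptions) might look like:

```
# /etc/rsyncd.conf -- minimal example; module name and path are assumptions
uid = root
gid = root
read only = yes

[config]
    path = /etc/cluster-config
    comment = shared cluster config files
    hosts allow = 192.168.1.0/24
```

The server side is started with `rsync --daemon`, and clients pull a module with the double-colon syntax, e.g. `rsync -av server::config /etc/cluster-config/`.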
In this case I can't say which machine acts as server and which acts as client...
A few more tips:
- Installing DNS on the server may be better than using a hosts file.
- For managing/copying files to the nodes you may use a distributed ssh tool, e.g. rgang.
BTW, 7 nodes is not so large; a normal copy command would suffice.
Thanks guys! I think using rsync with ssh/cron is probably just as efficient as using rsyncd.
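A sketch of what that might look like, assuming a head node named "headnode" and /etc/ganglia as the shared directory (both names are illustrative):

```shell
#!/bin/sh
# Pull shared config from the head node with rsync over ssh.
# "headnode" and the paths are assumptions for illustration.
SRC="headnode:/etc/ganglia/"
DST="/etc/ganglia/"
# -a preserves permissions and times, -z compresses, and -e ssh forces
# the ssh transport (no rsync daemon needed on the server).
CMD="rsync -az -e ssh $SRC $DST"
echo "$CMD"    # printed here as a dry run; run it directly on the nodes
```

A crontab entry such as `*/15 * * * * rsync -az -e ssh headnode:/etc/ganglia/ /etc/ganglia/` would keep each node in sync every 15 minutes; with key-based ssh auth already set up, no password is needed.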
True, copying files between 7 nodes (or even 70 nodes) wouldn't be that bad, but I was referring to linking all the config files using NFS (see above). That would probably require me to make 70+ links, which would take way too long.
I also like the idea of using a DNS server. I have a middlebox running dnsmasq (which does have DNS support), but I think I'm going to switch it to doing both DHCP and DNS.
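For reference, dnsmasq can serve both roles from one config file; a minimal sketch (the subnet, lease range, and domain below are made-up examples):

```
# /etc/dnsmasq.conf (excerpt) -- DHCP plus DNS for the cluster.
# The address range and domain are illustrative assumptions.
dhcp-range=192.168.1.50,192.168.1.150,12h
# Answer DNS queries for node names listed in /etc/hosts on this box
expand-hosts
domain=cluster.local
```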
I think I'm going to use LDAP authentication instead of NIS. NIS is kind of out of date, and I've always been curious about LDAP. I found a good tutorial here... http://wiki.archlinux.org/index.php/...authentication.
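Whichever guide you follow, the switch ultimately comes down to pointing NSS at LDAP; a typical /etc/nsswitch.conf excerpt (exact module names vary by distro and LDAP client package) looks like:

```
# /etc/nsswitch.conf (excerpt) -- check local files first, then LDAP
passwd: files ldap
group:  files ldap
shadow: files ldap
```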
I have just one last question. If I were to share my home folders between all 7 nodes, each with the same .ssh folder, would ssh still work? I suppose all that's in there is known_hosts, so I suppose it would.
This way ssh will definitely work. It is even better, since you will have the same id everywhere. You only have to do this once:
cat .ssh/id_rsa.pub >> .ssh/authorized_keys2
then you will have password-less login to every node.
There will not be any problem with the known_hosts file.
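Putting that together as a one-time setup on the shared home (a sketch: the key type and the lack of a passphrase are choices, and modern OpenSSH reads authorized_keys rather than the older authorized_keys2):

```shell
#!/bin/sh
# One-time key setup on an NFS-shared home directory.  Because every
# node mounts the same $HOME, a single append authorizes the key
# cluster-wide.
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
KEYFILE="$HOME/.ssh/id_rsa"
# Generate a key only if one doesn't exist yet (-N "" = no passphrase)
[ -f "$KEYFILE" ] || ssh-keygen -t rsa -N "" -f "$KEYFILE" -q
cat "$KEYFILE.pub" >> "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"
```

Note that sshd is picky about permissions: if the shared home or .ssh directory is group- or world-writable, key auth will silently fail.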
Last edited by biggerbug; 03-04-2009 at 08:35 AM.
Reason: missing sentence