I have been scratching my head over this problem for a couple of days and have yet to come up with an elegant working solution.
I want to set up a network with redundant home directories for all the users and software, so that if the machine with the relevant disks goes down for whatever reason, there is a fallback that can take over.
My original idea was to set up two NIS master servers running on the same domain, with a copy of the home directories on each machine. My reasoning was that if I give each server a different passwd map pointing to its local home directories, failover of one of the machines would cause the clients to use the other master's maps, resulting in business as usual (the home directories are synced on a regular basis).
I also tried the same thing, but with changes in auto.home, so the automounter maps from the different servers would use the same mount point for different home directories.
Neither of these schemes worked. Somehow the clients remember the last maps they used. For example: I am bound to NISmaster1, where the user Joe's home directory is /home1/joe. I restart the client, binding to NISmaster2, where joe's home is /home2/joe. The maps from the new binding do not overwrite the setting on the client, i.e. if I type cd ~joe, I still go to /home1/joe. Same thing for the autofs maps.
I suppose this is not so surprising, but it leaves me with little idea of how to set up a backup file server without some kind of cluster-like heartbeat.
I'm open to alternatives, not necessarily complete solutions.
I preferred not to generate network traffic by having all users' home directories on a centralized (NIS) server. So, taking a path less traveled:
On NIS server set up each user such that their home directory is /home/users/[joe,jim,username]
On NIS server map /etc/auto.master & /etc/auto.home = [* NISSERVERname:/home/users/&] (adjust for security)
On the workstation the user (joe) normally sits at, set up 'joe' using the same uid# as used on the NIS server, such that his home directory is /home/joe
On workstation complete the NIS setup, i.e. /etc/passwd = +:::::: etc.
On workstation start autofs
On workstation find a method to sync all /home directories to the NIS server's /home/users directories; this could be a cron job or .bash_login/.bash_logout scripts (requires a runlevel-3 CLI login)
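For reference, the automount pieces above might look something like this (the server name `nisserver` and the paths are placeholders; adjust them, and the export permissions, to your setup):

```shell
# /etc/auto.master -- tell the automounter to manage /home/users
# using the auto.home map (distributed via NIS or kept locally):
/home/users	auto.home

# /etc/auto.home -- wildcard map: any key looked up under /home/users
# (i.e. a username) mounts the matching directory from the NIS server;
# the '&' on the right-hand side expands to the looked-up key.
*	-rw,soft,intr	nisserver:/home/users/&
```

The wildcard entry is what lets one map line cover every user, so you don't have to maintain a per-user auto.home.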
When the user sits at his usual computer, he uses the files on that computer, and per the method established, newer files are synced back to the server.
When he sits at another computer, he uses the files from the NIS server.
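One way to implement the sync step (usernames, hostnames and paths here are only placeholders, and the rsync options depend on whether you want deletions propagated too):

```shell
# Cron entry on the workstation: push the local home directory to the
# server every night at 02:00. -a preserves permissions and times,
# -u skips files that are already newer on the server side.
0 2 * * *	rsync -au /home/joe/ nisserver:/home/users/joe/

# Or trigger it at logout instead, from ~/.bash_logout
# (this only fires for console/ssh logins, not GUI sessions):
rsync -au "$HOME/" nisserver:/home/users/$USER/
```

Note the trailing slashes: with rsync they mean "the contents of this directory", not the directory itself, so the two sides line up.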
Some programs (e.g. Opera) do not use the $HOME variable and have hard-wired download and similar directories.
Hope this stirs the grey cells for your solution.
Thanks for that. Your method certainly has the advantage of limiting congestion when the server is being used for other purposes like file backups. My only problem with it is that I have most large software installations centralized on a single file server and I don't want to maintain more than 2 copies of this.
Turns out though, if you leave a client long enough, it will in fact update to the autofs and passwd settings of the fallback master (you have to wait for the autofs mounts to expire by themselves). I only had real problems when trying to work on the fallback master during failure of the primary. In my naivety I had the fallback master bound to the primary master during normal operations and the switch from the original binding to the localhost just doesn't seem to work very well. Now I simply have the fallback master's client bound to itself (like in your method) and everything seems just peachy.
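If the wait for stale mounts to expire is the main annoyance, the expiry interval is tunable per map in auto.master (the 60 seconds here is just an illustration; the default is typically several minutes):

```shell
# /etc/auto.master -- shorten the idle timeout so automounted homes
# expire sooner and get re-resolved from whichever master the client
# is currently bound to:
/home	auto.home	--timeout=60
```

A shorter timeout means more mount/umount churn during normal use, so it's a trade-off against how quickly you need failover to take effect.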
Next up, link aggregation. Has anyone done this in the Suse 9.3 environment? The standard documentation doesn't look anything like the new Suse network configuration scripts, so I am a bit hesitant to just jump in with vi.
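I haven't tried this on 9.3 specifically, but SuSE normally drives bonding through a sysconfig file rather than the init scripts themselves, so a sketch along these lines may be a starting point (device names, address and bonding mode are all assumptions to adjust; check the kernel bonding documentation for the mode you actually want):

```shell
# /etc/sysconfig/network/ifcfg-bond0 -- SuSE-style bonding config.
# mode=balance-rr is simple round-robin; miimon=100 polls link
# state every 100 ms so a dead slave is dropped from the bond.
BOOTPROTO='static'
IPADDR='192.168.1.10'
NETMASK='255.255.255.0'
STARTMODE='onboot'
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='mode=balance-rr miimon=100'
BONDING_SLAVE0='eth0'
BONDING_SLAVE1='eth1'
```

The individual ifcfg-eth0/ifcfg-eth1 files should then no longer configure addresses of their own, and the network has to be restarted (rcnetwork restart) for the bond to come up.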