LinuxQuestions.org
Linux - Server This forum is for the discussion of Linux Software used in a server related context.

Old 10-02-2011, 02:06 PM   #1
KuimFieg
Member
 
Registered: Sep 2011
Location: France
Distribution: Debian Squeeze
Posts: 32

Rep: Reputation: Disabled
Your thoughts on shared/synced homes? NFS, AFS, rsync, puppet ?


Hi,

I'm looking for people's experience on this subject.
I'm trying to centralize my (home) computers in an effective way.

My current setup is:
  1. A server
    • DNS
    • Kerberos Master Server
    • Kerberos KDC (user/service auth management)
    • OpenLDAP (user info, uid, gid, shell, etc)
    • Kerberized NFSv4 (exporting /home with "privacy protection", i.e. encrypted)
    • SSH Server (public key authentication)
    • X2go Server (WAN Remote Desktop Access)
    • PuppetMaster (configuration/package synchronization)
  2. The clients
    • Logon granted by Kerberos server
    • All accounts info provided by OpenLDAP
    • NFS client is then granted access to server:/home
    • Kerberized SSH Server
    • ClusterSSH Client to administer all nodes at once
    • Puppet Client polling for configuration changes
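
For reference, the Kerberized NFSv4 export with "privacy protection" above looks roughly like this (hostnames are placeholders, not my exact config):

```
# /etc/exports on the server: export /home over NFSv4 with Kerberos
# privacy (sec=krb5p = authentication + integrity + encryption).
/home  client.example.com(rw,sec=krb5p,fsid=0,no_subtree_check)

# Corresponding /etc/fstab entry on each client:
server.example.com:/  /home  nfs4  sec=krb5p,_netdev  0  0
```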

Overall, things are working pretty well. My main concerns are:
  • Laptops may become unstable or lock up (kernel panic) if the wifi signal drops.
  • The server is connected via powerline networking (200 Mb/s), which turns out not to be 100% reliable, occasionally causing all clients to lock up.
  • Conflicting per-host user data such as ~/.ssh. I think I can get around that with sshd's "AuthorizedKeysFile" option.
  • Certain apps such as Chromium or Firefox react slowly because they hog the bandwidth, and occasionally the client becomes temporarily unresponsive. I work around this by mounting over the user dir (for example, mount --bind /rsynced/$USER/.mozilla /home/$USER/.mozilla) and rsyncing the data from the server to the client; when the program is closed, the data is pushed back to the server. This keeps all clients synchronized and gives local-disk write speed for that path, but it's pretty messy.
  • Conflicting user info in other apps such as Puppet, which stores client-specific SSL certificates in ~/.puppet/ssl. I have to resort to the same trick as above, or maybe edit the source? It's annoying, messy, and time-consuming, and there appears to be no way to override the user config path.
  • Copying or generating big files within the NFS path (with VirtualBox, for example) hogs the bandwidth and renders the local system unusable until the operation is done.
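
To make two of the workarounds above concrete, here is a sketch (all paths are examples; the script needs root for the mounts):

```
# --- sshd: keep authorized_keys out of the shared /home ---
# In /etc/ssh/sshd_config on each client, point sshd at a local,
# machine-specific per-user path instead of ~/.ssh/authorized_keys:
#   AuthorizedKeysFile /etc/ssh/authorized_keys/%u

# --- bind-mount + rsync trick, e.g. for ~/.mozilla ---
LOCAL="/rsynced/$USER/.mozilla"       # local staging copy
SHARED="/home/$USER/.mozilla"         # NFS-backed profile dir

rsync -a --delete "$SHARED/" "$LOCAL/"   # pull latest profile locally
mount --bind "$LOCAL" "$SHARED"          # writes now hit the local disk

# ... user runs the browser ...

umount "$SHARED"                         # drop the bind mount on exit
rsync -a --delete "$LOCAL/" "$SHARED/"   # push changes back to NFS
```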

What I really like is not having to redefine taskbar shortcuts, the wallpaper, basic app configuration, and so on.

To work around the NFS locking problem, I've been considering AFS. Does anyone have experience with it?

I'm now wondering whether mounting a shared home is a good idea after all. Some apps are simply not designed for it. I can see it working really well on a gigabit network, but it's pretty flaky for laptops; it seems like a limited solution.
The main reason I have not simply been rsyncing the users' home dirs so far is that each computer has a different amount of disk space and I wanted the same folder structure everywhere. Bandwidth hogging would also be an issue when rsyncing, and I would still have the same conflicting-user-info problem, which means a lot of work filtering all that out and adapting the rsync scripts as apps get installed or removed.

What would you recommend?

For information, my real bandwidth (client<->server) is 10/20 Mb/s (measured with iperf).

Last edited by KuimFieg; 10-02-2011 at 03:05 PM.
 
Old 10-04-2011, 05:04 PM   #2
KuimFieg
Member
 
Registered: Sep 2011
Location: France
Distribution: Debian Squeeze
Posts: 32

Original Poster
Rep: Reputation: Disabled
I don't know what I was smoking, because Puppet doesn't store user-specific data! So that's a non-issue.

Anyway, I am now testing a package called "cachefilesd".

Basically, it caches NFS data on the local hard disk (via the kernel's FS-Cache facility). That sounds interesting, since caching is the OpenAFS feature that got my attention in the first place.
I noted a slight improvement when connected via cable. Over wifi, however, I had a few issues, mainly corrupted data and cache, and I still get the same lockups when the wifi signal is disturbed.
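
For anyone wanting to try it, the setup is small (values below are the illustrative defaults, not a tuned config):

```
# /etc/cachefilesd.conf
dir /var/cache/fscache   # where the on-disk cache lives
brun  10%                # stop culling when this much space is free
bcull  7%                # start culling the cache at this threshold
bstop  3%                # refuse new cache entries below this

# Then mount NFS with the "fsc" option so FS-Cache is actually used:
#   mount -t nfs4 -o sec=krb5p,fsc server.example.com:/ /home
```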

Right now I only foresee a few possibilities:
  1. OpenAFS turning out to be a miracle and solving my problems.
  2. Mounting over specific dirs (.mozilla, .VirtualBox, etc.) with "--bind" at startup and syncing with rsync (as I'm doing now).
  3. Keeping a small /home and rsyncing from and to the server at startup or logon. A major disadvantage is not being able to work on two logged-on computers simultaneously.
  4. Using something like unison, but I think that's pretty dangerous.
  5. Using a file-sync daemon like "lsyncd" or "inosync", but that's probably tricky to get right.
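
For option 5, an lsyncd config would look something like this (paths and hostname are placeholders; I have not actually deployed this):

```lua
-- lsyncd.conf.lua: watch a local home with inotify and push
-- changes to the server with rsync over ssh.
sync {
    default.rsyncssh,
    source    = "/home/user",
    host      = "server.example.com",
    targetdir = "/home/user",
    delay     = 5,  -- batch events for 5 seconds before syncing
}
```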
 
Old 01-10-2012, 05:21 PM   #3
KuimFieg
Member
 
Registered: Sep 2011
Location: France
Distribution: Debian Squeeze
Posts: 32

Original Poster
Rep: Reputation: Disabled
To whom it may be of interest:

I guess it may be strange to reply to myself like this, but I thought I would give a little feedback for anyone who lands here from a Google search.

Basically "cachefilesd" didn't help me.

But OpenAFS turned out to be amazing! It totally solved my problem. It's been rock solid for 3 months now, without a glitch. The way caching works on this software means I can even reboot the *file server* if I want to, without affecting the clients.

It was quite a pain to set up, since it's a totally different animal as far as file servers go. Furthermore, you cannot take shortcuts: the (current) OpenAFS setup process forces you to implement security from the start.
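
For the curious, the client-side cache that makes this work is configured in one line; the format is mount point, cache directory, and cache size in kilobytes (the size below is just an example):

```
# /etc/openafs/cacheinfo on a client (~500 MB disk cache):
/afs:/var/cache/openafs:512000

# Inspect cache usage on a running client:
#   fs getcacheparms
# Check whether the client can still reach its file servers:
#   fs checkservers
```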

For information, I had to switch from Firefox to Chrome, which seemed to perform better in that environment. Apart from that, everything works great.

I can only recommend having a look at this awesome technology.
 
1 member found this post helpful.
  

