Server Hardware Overhead -- GUI vs. CLI (Command Line Interface)
These days I have a home network which supports from one to six users. I have a print server (Windows XP Professional, SP3) and a file server (using Samba and Windows shares) which is also configured to act as the Browse Master for this workgroup. This is currently a peer-to-peer network; there is no Domain Controller, and I have no current plans to introduce one. My home network is named "Paleozoic", and it includes both Wi-Fi (IEEE 802.11g) and Fast Ethernet (IEEE 802.3u) segments.
Everything is working well now.
The File Server (System Iguanodon) is a "legacy" machine running Ubuntu Server 12.04.1 LTS with a Pentium 4 CPU turning over at 2.4 GHz. The system has a 160-GB PATA hard drive which is currently 3.5% occupied, and it has 1 GB (1,024 MB) of RAM. Like all Server versions of Ubuntu, it has no GUI, nor is there an immediate need for one. I am familiar with the bash command line as implemented under current versions of Ubuntu Linux.
At times, though, I have come across operations which would be "dead easy" using a GUI such as Unity (the default GUI for Desktop versions of Ubuntu Linux 12.04 or 12.04.1), but which are, at best, more difficult using bash commands. (Such as adding an additional network printer to this particular machine.)
My question is:
If I am planning to do a new install of a Desktop version of Ubuntu Linux on the same hardware, can I estimate what the impact on server response time or latency will be before I actually perform it?
I suppose that an alternative wording would be:
How much CPU time or memory does a GUI like Unity eat up, and can that be estimated beforehand?
A "stooge response" would be a suggestion to the effect that I should make the change (install the Desktop version of Ubuntu) and then observe the effect on resources and CPU time using the Ubuntu System Monitor. (Could we avoid that, please?)
My problem is that I think I should have at least a rough-and-ready estimate before I make this type of change.
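One rough-and-ready way to get that estimate is a sketch like the following. The ~400 MB figure for an idle Unity session is an assumption (a commonly cited ballpark for Ubuntu 12.04, not a measurement from this machine); the script just reads the box's actual RAM and subtracts the assumed footprint:

```shell
#!/bin/sh
# Rough headroom estimate before installing a desktop GUI.
# ASSUMPTION: an idle Unity session (X + lightdm + compiz) uses roughly
# 400 MB of RAM -- a ballpark figure for 12.04, not a measured value.
gui_mb=400

# Total installed RAM, from the kernel's own accounting (in KB).
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
total_mb=$((total_kb / 1024))

echo "Total RAM:             ${total_mb} MB"
echo "Assumed GUI footprint: ${gui_mb} MB"
echo "Estimated headroom:    $((total_mb - gui_mb)) MB"
```

On a 1 GB machine that leaves roughly 600 MB for Samba and the rest of the system, which is the kind of margin you would want to sanity-check before committing to the install.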
Unless it's available as a live disc, the best option would be to install it and test it out. If you have an extra HDD lying around, you can swap them out, install Ubuntu, and do your tests, then compare the specs to see for yourself. You could even add a HDD and dual-boot the CLI version of Ubuntu and the GUI version. A little tedious, but the choice is yours; at least that's what I would do.
Hope this helps.
You might also consider some alternatives to running a GUI full time on the server:
- Run X applications remotely. You need a functioning X environment locally, but then when you establish an ssh connection to the server, enable X forwarding and run the application on the server; its windows display on your local machine. It can be a bit pokey depending on the connection, but you aren't running a full X session on the server.
- Start X only when needed. So ssh into the server, start a VNC session (tunneled over the SSH connection of course), do your work, then shut down the VNC session.
- Look into a lighter-weight solution like webmin. It is probably less of a resource hog than running X all the time.
- If you do all your admin work at that machine, run it in console mode, then use telinit to switch to a GUI only when you absolutely need to. Shut it down when you're done.
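The first two options above can be sketched as shell commands. The hostname "iguanodon", the user "admin", and the display number ":1" are placeholders for illustration, not details from the original post:

```shell
# Option 1: X forwarding -- the program runs on the server, but its
# window appears on your local desktop. No full X session on the server.
ssh -X admin@iguanodon
# ...then, inside that ssh session, launch the GUI tool you need:
system-config-printer &

# Option 2: a VNC session tunneled over ssh. Start a VNC server on
# display :1 (TCP port 5901), forward that port, and connect locally.
ssh -L 5901:localhost:5901 admin@iguanodon 'vncserver :1'
vncviewer localhost:5901
# When finished, shut the VNC session down on the server:
ssh admin@iguanodon 'vncserver -kill :1'
```

Either way, the X/VNC processes run only for the duration of the task, so the steady-state overhead on the server stays close to zero.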
I agree with trying to use it only when required, but I'm not sure whether telinit (e.g. telinit 5) would do a full runlevel switch, i.e. stop/start some processes, so that it actually goes from runlevel 3 to runlevel 5.
I reckon you should be able to run the system at runlevel 3 (assuming that's the same for Ubuntu) and then run 'startx' when you need it; logging out should close it, I think.
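One caveat: Ubuntu 12.04 uses Upstart rather than classic SysV runlevels, so there is no real 3-vs-5 split; the practical equivalent (a sketch, assuming lightdm, Unity's default display manager on 12.04) is to stop the display manager and start X only on demand:

```shell
# Stop the login GUI and drop to the console (lightdm is the default
# display manager on Ubuntu 12.04; substitute yours if different).
sudo service lightdm stop

# Start an X session by hand when you actually need one;
# logging out of it returns you to the console.
startx

# Restore the graphical login later, if wanted.
sudo service lightdm start

# To keep the GUI from starting at boot at all, mark the Upstart job
# as manual (remove the override file to undo this):
echo manual | sudo tee /etc/init/lightdm.override
```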
In more direct answer to your question: I doubt anyone can tell you how much overhead there is unless they've done it themselves, and not many do.
I would say that even if you always have the GUI available, like any program, if it's not actually being used (i.e. you are not sitting there mousing around), then most of it will be swapped out, so the overhead would/should be minimal at that point.
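Rather than guessing about what is or isn't swapped out, you can sum the resident (RSS) memory of the usual GUI processes at any moment. The process names below are the ones typical of a Unity session on 12.04 (Xorg, lightdm, compiz, unity) and are an assumption; adjust them for your setup:

```shell
#!/bin/sh
# Sum resident memory (ps reports RSS in KB) of typical GUI processes.
# On a headless server this reports 0 MB, since none of them exist.
ps -eo rss=,comm= | awk '
    $2 ~ /Xorg|lightdm|compiz|unity/ { sum += $1 }
    END { printf "GUI resident memory: %d MB\n", sum / 1024 }'
```

Running this on an idle desktop versus one in active use would also show how much of the GUI stays resident when you are not "mousing around", which is exactly the question here.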