Linux - General
This Linux forum is for general Linux questions and discussion.
If it is Linux Related and doesn't seem to fit in any other forum then this is the place.
Hi. I work at a public library and presently "manage" our network. We have 6 systems for public internet use, 2 systems for public word processing, and 5 systems for catalog access (basically, a small set of webpages needs to be accessed).
Right now we have a 2003 Server with XP clients, in which we use GPO to limit user activities, etc. This works, but not well at all.
I have recently looked into implementing Linux for our public computing solution. I use Linux at home on my laptop, but have NO idea how this would be implemented in a network type situation. Basically I want to have these systems as simple as possible, with only the features we want available.
I would like to be able to have the computers update on a schedule defined in a central location, be able to shut the computers down all at once, or as I choose.
I have looked into Userful DiscoverStation. Would this be a better solution than tweaking some other software package?
Sorry for the plethora of questions. Any kick in the right direction is gratefully appreciated.
There are several ways to do what you want (that's the beauty of Linux, right?). The easiest is to install Linux on each machine with a default user account and set up autologin. With autologin you control exactly what is available at the console, which limits what a user can do. For the web-access machines, you can run a proxy server like squid to limit the content users can reach (like NetNanny).
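For the autologin part, most display managers support it in their config. As an example, a GDM fragment might look like this (the exact path varies by distro, and the "public" account name is just an assumption):

```
# /etc/X11/gdm/gdm.conf (path varies by distro)
[daemon]
AutomaticLoginEnable=true
AutomaticLogin=public
```

KDM has an equivalent setting, so the same idea works whichever desktop you pick.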
If you really want to make administration easier in the long run, it will take a lot of up-front work, but you can set up each system to network boot and go from there. This takes more planning and a lot of server-side configuration, but in the end you have one setup to maintain: the other systems pick up updates simply by rebooting.
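The network-boot side starts with the DHCP server telling clients where to fetch their boot image. A sketch of an ISC dhcpd.conf fragment (addresses and filename are examples, and you'd also need a TFTP server serving the image):

```
# dhcpd.conf fragment: point PXE clients at a TFTP boot image
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.150;
    next-server 192.168.1.1;      # TFTP server address
    filename "pxelinux.0";        # PXELINUX bootloader
}
```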
Unfortunately, the magnitude of your project is a little much to really get online just through this forum. It will take a lot of planning and administrative know how (true for any OS).
You could look at using a thin-client setup. Basically, the server is the only "real" computer; the clients just use its resources over the network. That would centralise everything nicely for you. Ubuntu also has thin-client technology integrated into their OS with some purty GUI admin tools and stuff; it might be worth having a look, and it might have a lot of the hard work done for you. Not sure, though; never tried it.
Thanks for the kick in the right direction. I realize this is going to be an undertaking, just as setting up our 2003 server was, and I realize it's not going to be a 3-step process underlined in this forum. However, because I don't have a lot of networking experience (outside of what I've described, and using OS X, Linux and XP on a network and sharing at home), I am not really familiar with ANY of the technologies to do this.
Note that this does NOT have to be a completely free solution. I had been looking at Red Hat Enterprise setups, and Novell SuSE Linux setups... are any of these subscription/pay distributions worthwhile in making my life easier if we choose to pursue a Linux network, or would I do the same amount of work and setup with a FREE distribution?
Here is a place to look for integrating Linux into a Windows network. As you will see it is a career in itself. The thrust of it is that a Linux machine can participate in a Windows AD domain in the same way that an NT4 machine can. If you are using the older Windows network protocol then the Linux machine can fully participate in the domain, including using the Linux machine as a domain controller.
I don't know for sure, but I don't think using a distro like RHEL would make any difference. With pay distributions like that, the only bonus you get is official support, not specialised apps to make this sort of thing easier. If you want to try RHEL for free, download CentOS. It's RHEL with the Red Hat logos removed, all perfectly legal under the GPL.
It won't be easy, but the good thing is there are a lot of people here who are willing to help, and a few have probably done it before. I'd suggest doing some research, picking a technology, and then giving it a go somewhere. Boot them all from live CDs and practise sharing an X display over a network.
Well, I have had little luck finding any additional information on this topic. I don't really even know what to search for. Would I be looking for an application server? Here are the two models I am presently thinking of.
1) Each client has the OS installed on it. Somehow the client computers authenticate with the server (I assume LDAP?), which gives them a file mode (the word was escaping me... something like 0500). Each of the client machines would have the necessary applications, and the user would only be allowed to run the programs that I allow. Somehow updates would be pushed to the clients, or the configuration to make them update themselves would be pushed. The part I'm confused about with this method is how you determine which programs can and cannot be run. Is this pure program permissions, or what?
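For what it's worth, restricting which programs can run really can be done with plain file permissions plus a group. A minimal sketch, run here against a temp file so it's safe to try (the 'pubapps' group and the firefox path are assumptions, not anything from this thread):

```shell
# Restrict who may execute a program using ordinary file permissions.
# The real target would be e.g. /usr/bin/firefox; a temp file stands in.
set -e

APP=$(mktemp)            # stand-in for the application binary
chmod 0750 "$APP"        # owner: rwx, group: r-x, everyone else: nothing
# On the real system you would additionally run: chgrp pubapps /usr/bin/firefox
# so only members of the (hypothetical) 'pubapps' group can execute it.

stat -c %a "$APP"        # prints the octal mode
```

Users not in the group simply get "Permission denied" when they try to run the binary.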
2) Each client has a VERY basic OS installed on it. Somehow the applications are launched from the server, such that only ONE copy of each program needs to be present, and only one set of permissions needs to be set.
Any suggestions which is better, etc?
Thank you once again for helping me foray into a new frontier (for me).
Alright, resurrecting this thread from the dead after more work-week frustrations with XP/2003.
I am taking a new look at the thin-client setup. This seems like a great model, as far as one copy of all software and only one machine to upgrade. However, I have a few concerns:
1) Server dies. What would happen if we came in one day to find our server's HD had crashed, and we were stuck? Realizing that it's necessary to have constant backups, would there be a way to clone the HD, and even possibly only clone the important parts of the HD, and quickly move them to a new machine?
2) Resources. With 15 computers all running a web browser, some playing videos, music, etc., this would probably put a heavy tax on our server, no? It's a 3GHz P4, a very nice fast machine, but definitely not a dual-processor server.
3) Computers. All of our computers are fairly new (by that I mean 1GHz+, 256MB RAM+, 20GB HD+)... in other words, plenty of computing power and resources for each user. I'm afraid that by cutting each of these computers' advantages out, and making them all run off of one server, we would be slowing things down for everyone. Unless, perhaps, I am confused. I have been assuming that, with a thin-client setup, the server would be running the apps and simply forwarding the X data to the client, which is really just acting like a dumb terminal. Perhaps it is the case that the server forwards the program to the client, which uses its own resources to run it?
I've never done a thin-client setup, but I'm pretty sure you can do it this way; I don't have the money for enough hardware to try it myself. The answer to the crashing hard disk is simple: buy another and use a RAID mirror. The two disks (preferably the same size) will appear as one, so it's like having half the space (bad), but if one drive mysteriously dies you won't experience any downtime, just a note in your logs letting you know it happened, so you can buy a replacement drive and rebuild the RAID array on it while the library is closed.
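Alongside the RAID mirror, a periodic archive copy of the critical trees gives you a quick rebuild path onto a fresh machine. A minimal sketch; temp directories stand in for the real paths (which are assumptions, e.g. /etc and a mounted backup disk):

```shell
# Quick-rebuild backup sketch: archive-copy critical config to a second disk.
set -e

SRC=$(mktemp -d)     # stands in for e.g. /etc on the server
DST=$(mktemp -d)     # stands in for a mounted backup disk

echo "important config" > "$SRC/smb.conf"
cp -a "$SRC/." "$DST/"        # -a preserves permissions, owners, timestamps

cat "$DST/smb.conf"
```

Run that from cron nightly and a replacement server is mostly a reinstall plus one copy back.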
You could have the system partitions mounted from the server. This would centralize installation and upgrading, but wouldn't tax the server as much, because it would be working as a fileserver and wouldn't have to execute the code itself.
I think this might require that the hosts be identical. If not, perhaps the /etc, /lib, and /usr/lib partitions should be local. Keeping /tmp and /var local is a no-brainer too. I would recommend reading the Linux Filesystem Hierarchy Standard (found on the www.tldp.org website) for ideas on which partitions can be shared/static. If done correctly, and if the hosts are similar enough, you could perform security upgrades on the central server and the programs would be updated everywhere. Also, if a web browser is the only software that should be running, you might want to google for "linux kiosk".
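On a client, the shared trees would just be NFS mounts. A sketch of the /etc/fstab entries (server name and exported paths are examples, not from this thread):

```
# /etc/fstab fragment on a client: shared trees read-only from the server,
# while /tmp and /var stay on the local disk
server:/usr      /usr      nfs   ro,hard,intr   0 0
server:/opt      /opt      nfs   ro,hard,intr   0 0
```

Mounting them read-only also means a patron can't modify the shared software even if something goes wrong.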
Also consider using a web proxy like squid to control access to the web.
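A squid access policy for this is only a few lines. A sketch of the relevant squid.conf fragment (the subnet and blocklist path are examples):

```
# squid.conf fragment: allow the public LAN, deny a domain blocklist
acl lan src 192.168.1.0/24
acl badsites dstdomain "/etc/squid/blocked-domains"
http_access deny badsites
http_access allow lan
http_access deny all
```

The blocked-domains file is just one domain per line, so the librarians can maintain it without touching the rest of the config.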
I had read about libraries that use a thin-client solution. One problem they have is with printing: printing to a GDI printer, for example, overtaxes the network. A program at a thin-client terminal is actually running on the server. The server is the X client, and the terminals are the X servers, so the graphics travel down the network. I think that for normal office applications like database terminals or word processing, the thin-wire solution would work better than in your case, where the users are web-browsing.
One other thought is that you may want the public terminals kept separate from the network that the library uses for its normal work. Being connected to the internet presents dangers. Especially if the public terminals are using windows hosts. Someone picking up malware on one of the public terminals could turn it into a zombie computer which could attack the other hosts.
Since you have a small number of workstations to deal with, and almost no "user accounts," you can simply set up a standard Linux distribution on each machine. It should be a minimalist configuration, with only the applications you intend the public to have.
Each workstation automatically logs in to a single, very limited account. It boots straight into that account and cannot go into any other. Ctrl+Alt+Del and other key sequences are disabled. X also ignores magic sequences like Ctrl+Alt+Backspace.
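Disabling the X zap sequence is one option in the server config. For XFree86/X.Org it looks like this:

```
# xorg.conf / XF86Config fragment: disable Ctrl+Alt+Backspace server kill
Section "ServerFlags"
    Option "DontZap" "true"
EndSection
```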
When a user session ends, it logs-out, and is immediately logged-back in again, fresh and new. The critical files in the user's home-directory are created by another user and are read-only to this one. Upon login, all other files are erased.
So, how do you log in? Through an alternate runlevel. Booting to any other runlevel, or changing the boot-sequence in any way, requires a password. The BIOS ignores any other boot-device, again without a password.
I think what sundialsvcs was referring to with files being auto-deleted is that you would have the home directory created at bootup in memory (a ramdisk); rebooting would of course completely flush it out. This could also be accomplished with a login script along the lines of "rm -rf ~/* ~/.[!.]* ; cp -a /etc/skel/. ~" (as an example; note that a naive "rm -Rf ~" deletes the home directory itself, which the restricted user may not have permission to recreate).
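Spelled out as a script, the reset looks like this. Temp directories stand in for /etc/skel and the public home (those paths, and the dotfile names, are just examples), so this is safe to run as-is:

```shell
# Session-reset sketch: empty the public home, repopulate from a skeleton.
set -e

SKEL=$(mktemp -d)          # stands in for /etc/skel
PUBHOME=$(mktemp -d)       # stands in for /home/public

echo "default prefs"     > "$SKEL/.profile"
echo "patron leftovers"  > "$PUBHOME/junk.txt"

# Empty the home rather than deleting the directory itself, then restore.
# The dotfile glob may match nothing, hence the || true.
rm -rf "$PUBHOME"/* "$PUBHOME"/.[!.]* 2>/dev/null || true
cp -a "$SKEL/." "$PUBHOME/"
```

Hooked into logout (or run at login), every patron starts from the same clean slate.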
As to restricting which programs a user can run, I would assume you would create a user menu with only the programs needed (OpenOffice, Firefox, xine, etc.). I don't think users would need shell access, so there should be no terminal windows. I'd use either GNOME or KDE as the GUI backend for interoperability and ease of use (both can be configured to resemble better-known environments like Mac and Windows, and both are highly configurable).
If the thought of having all the systems running off the server is too complex, you could build an image on one system (call it a gold image), then use partimage to save it to the server and replicate it to the other systems. This works especially well if all systems are essentially the same basic hardware (same video vendor, network, IDE controller, etc.). In other words, if all the "client" machines are running the same base video (nVidia GeForce 2 MX or better), the same type of hard drive (IDE with a ribbon cable, aka PATA), and the same basic monitor settings (1024x768 @ 60Hz, for example), then you take the lowest system in the pool to build the master image, and it should work on the rest without a problem. You could even get really advanced and have them auto-reimage nightly if you wanted to. This way, you only have one system to upgrade the image on, then propagate to the rest.
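The save/restore cycle behind imaging can be sketched with plain dd and gzip; here scratch files stand in for the partition devices so it runs without root (on real hardware the input would be something like /dev/hda1, and partimage stores its images in much the same compressed way):

```shell
# Gold-image sketch: save a compressed image, restore it onto a target.
set -e

GOLD=$(mktemp)           # stands in for /dev/hda1 on the master machine
TARGET=$(mktemp)         # stands in for the disk on a machine being imaged
IMG=$(mktemp)            # the saved image on the server

dd if=/dev/urandom of="$GOLD" bs=1024 count=16 2>/dev/null  # fake filesystem data

dd if="$GOLD" 2>/dev/null | gzip > "$IMG"        # save compressed image
gunzip -c "$IMG" | dd of="$TARGET" 2>/dev/null   # restore onto the target

cmp -s "$GOLD" "$TARGET" && echo "images match"
```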
Also, streaming video over 100Mbit Ethernet is fine unless you are doing HD-quality video. Most internet video is either 320x200 or 640x480, so it should work no problem. Use network switches instead of hubs if possible, as they will perform far faster (full duplex vs. half duplex, etc.).