Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
WSUS stands for Windows Server Update Services. This service enables centralized, organized, and selective updating of desktops from a single point in the network: the Windows server downloads the updates from Microsoft, then distributes them to the Windows desktops on the network whenever and wherever appropriate. The service is most useful if your enterprise has limited Internet bandwidth and the internal network can better support the load.
I am wondering if there is already a similar open source project on Linux, though I doubt it, because I rarely see large desktop deployments of Linux.
I currently see creating a yum repository as the most feasible solution to this. The Linux desktops' repos would point to a single server (say, CentOS) on the network, from which they would pull their updates. That CentOS server, in turn, would pull the updates from the official CentOS repos.
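For what it's worth, the client side of that setup is just a repo file under /etc/yum.repos.d/ on each desktop. A minimal sketch might look like this (the server name and path are placeholders, not anything from a real setup):

```ini
# /etc/yum.repos.d/internal.repo  (hypothetical internal mirror)
[internal-updates]
name=Internal CentOS update mirror
baseurl=http://centos-mirror.example.lan/updates/
enabled=1
# keep GPG checking on so clients still verify package signatures
gpgcheck=1
```

With the stock CentOS repos disabled, the desktops would then fetch everything from the internal server only.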
But this solution works the opposite way from what I want. I want the server to 'PUSH' the updates to the clients, so that no action is required from the end users, something like a silent installation.
So, any ideas? Or maybe point me to a better solution for this?
You can download the files to a centralized server (CentOS, for example) with a scheduled crontab job, then rsync the files from that server to the clients.
Then, on the clients, you can install the updates locally with yum, also from crontab.
This could do what you want.
The other solution for this is to create a local repo from the downloaded RPM packages with the "createrepo" command, then have the clients update only from your local repo.
Of course, this is the "opposite" (pull) way, as you mentioned.
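The server side of that is only a couple of commands. A sketch, assuming the RPMs have already been downloaded into /srv/repo (the paths are placeholders, and createrepo comes from the createrepo package):

```shell
# Build/refresh the yum repo metadata over the downloaded RPMs
createrepo /srv/repo

# Serve the repo with any web server, e.g. a symlink under Apache's docroot
ln -s /srv/repo /var/www/html/repo
```

Re-run createrepo from cron after each download so the metadata stays in sync with the packages.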
There are a number of ways of creating something that functions much like what you seem to want: most distros have some kind of updater that goes out and checks for updates, so all you then need is to cache those updates, and you have achieved the bandwidth-saving aspect. It doesn't really need much user activity if the updater runs automatically, but you are in a bad place if you get a bad update.
A slightly different flavour is to create your own repo from packages that are 'known good'; in this case you should never get a bad update to a critical app, but the admin overhead is likely to slow things down somewhat (I bet you'd still be faster than MS, though).
Don't like those? You might want to have a look at provisioning (eg, here, particularly Spacewalk/Satellite); although most of the provisioning projects are mainly directed at servers, that doesn't mean that, with a little effort, you couldn't get them to work for you.
This, and also here, look interesting; it combines some element of single sign-on with provisioning, but whether it can do updates as well as the initial, bare-metal stuff is difficult to tell.
I have a suspicion that the real way to do this is to use something like Puppet, but whether there are any modules for your particular OS, or whether you would have to write your own, remains a mystery.
If you create a local repo on your server, couldn't you do something like ssh user@client yum install whatever?
Or are we missing something?
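That single ssh command generalizes into a tiny "push" script on the server. A minimal sketch (the host names are placeholders, and it assumes key-based SSH auth as root from the server to every client):

```shell
#!/bin/sh
# Sketch: "push" updates by running yum on each client over SSH.
# Clients are assumed to already point at the local repo.
push_updates() {
    for host in "$@"; do
        # -y answers every prompt, so the install is silent on the client
        ssh "root@$host" yum -y update
    done
}

# Example (would contact the clients): push_updates desktop01 desktop02
```

Run it from cron on the server and you get the WSUS-style behaviour: no action needed from the end users.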
No, it was me; I'm just not that familiar with yum and what it can and cannot do. I'll try to research its capabilities some more, then maybe develop a graphical application for it. Thanks all!
Thanks for all the links, I'll try to read up on those, though I think provisioning might be a good option as well.