Is it advisable to run 'yum -y update' daily via a cronjob on a critical production server?
You can help minimize the risk of an update causing issues by only installing security updates on production servers. This means NOT installing bug-fix/feature updates.
Only install bug fixes if needed.
To make yum only install security patches do the following:
1. Install the yum security plugin by running 'yum install yum-plugin-security'
2. Add the --security option to the yum update command, like this: 'yum --security update'
So, if you do NEED to automate your updates (THIS IS NOT ADVISABLE), installing security only patches will help minimize your risks.
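If you do go down that road, a minimal sketch of a security-only cron job might look like this (the script name, schedule, and log path are just my own examples, not anything standard; adjust to taste):

    #!/bin/sh
    # e.g. dropped into /etc/cron.daily/ -- hypothetical location
    # Requires the yum-plugin-security plugin from step 1 above.
    # --security limits the transaction to security errata; -y answers yes.
    /usr/bin/yum --security -y update >> /var/log/yum-security-cron.log 2>&1

Even then, check the log afterwards rather than assuming the run went cleanly.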
BTW, as a rule I only install security patches on all my production servers, even though I run them manually.
... but, even then, you might be "relying too much" upon what some distro-vendor considers to be "a 'security' update."
The single most important attribute of any production server is: "absolute continuity of service." It is not okay(!) to update the software that performs that service "any ol' time a distro-provider wants to." You must plan(!) for these things.
While I agree with you, that's not the whole story. I did not go into how I deal with the issue of continuity. In my environment, I take snapshots of anything I'm going to run updates on. This is beyond our normal backups, test environments, etc. that we have in place. Having said that, I must, at some point, either trust the updates, exhaustively scan through every line of source code (Windows source code??? ..... anyone???), or not run the updates. I have found that even running updates in a test environment doesn't always catch all the issues one can run into. Fortunately, Red Hat is very good about its security updates (I don't think I've ever had a RH security update break something), and snapshots allow me to run updates and then hit the rewind button if something breaks. ...... usually :}
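To make the "rewind button" part concrete, here is a minimal sketch assuming LVM-backed storage (volume group and snapshot names are made up for illustration; in practice this may just as well be a hypervisor-level snapshot):

    # Take a snapshot of the root LV before updating (name/size are examples)
    lvcreate --size 5G --snapshot --name root_preupdate /dev/vg0/root

    # Apply the security updates
    yum --security -y update

    # If something breaks: merge the snapshot back, reverting the origin
    # (the merge completes when the volume is next activated, e.g. at reboot)
    lvconvert --merge /dev/vg0/root_preupdate

    # If everything is fine: discard the snapshot
    lvremove /dev/vg0/root_preupdate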
I think the clue to the answer is this: "a critical production server".
You are really in a tough boat on this. You NEED to keep it current for threats but you also don't need to bork it with a faulty update of some junk program.
Best practice is to review the updates and decide yourself, or get management to issue a policy.
One clever "shop" that I know of puts all "system-related" files onto a single volume (image). Every part of any of their "standard server-configurations" which is not "malleable user data" is stored on that volume.
There are three virtual-disks associated with each (virtual ...) production server: "previous," "current," and "next." All of them are read-only to the server in question.
Regularly, they apply system updates using an offline master machine, then run a series of production-readiness tests which gruelingly examine everything that is of operational importance to them.
If everything is well, the changes are replicated via rsync onto the "next" images.
A server can then be shut down, the three disk images are "rotated," and the server is restarted: the freshly updated image becomes "current," the old "current" becomes "previous," and the old "previous" is recycled as the new "next."
And, in the case of "extreme unction", they have one, if not two, "fall-back positions" to drop to.
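Purely as a sketch of how the replicate-and-rotate step could be scripted (all paths, image names, and the use of virsh here are my assumptions, not details of that shop's setup):

    #!/bin/sh
    set -e
    IMGDIR=/srv/images/web01        # hypothetical per-server image directory

    # 1. Push the tested system files from the offline master onto the
    #    mounted "next" image.
    rsync -aHAX --delete /mnt/master-system/ /mnt/web01-next/

    # 2. Stop the guest, rotate next -> current -> previous, recycle the
    #    old "previous" as the new "next", then restart.
    virsh shutdown web01
    mv "$IMGDIR/previous.img" "$IMGDIR/rotate.tmp"
    mv "$IMGDIR/current.img"  "$IMGDIR/previous.img"
    mv "$IMGDIR/next.img"     "$IMGDIR/current.img"
    mv "$IMGDIR/rotate.tmp"   "$IMGDIR/next.img"
    virsh start web01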
One clever "shop" that I know of puts all "system-related" files onto a single volume (image). Every part of any of their "standard server-configurations" which is not "malleable user data" is stored on that volume.
There are three virtual-disks associated with each (virtual ...) production server: "previous," "current," and "next." All of them are read-only to the server in question.
Regularly, they apply system updates using an offline master machine, then run a series of production-readiness tests which gruelingly examine everything that is of operational importance to them.
If everything is well, the changes are replicated via rsync onto the "next" images.
A server can then be shut-down, the three disk-images are "rotated," and the server is restarted. The previous "last" image becomes the "next" one.
And, in the case of "extreme unction" , they have one, if not two, "fall-back positions" to drop to.
I like this idea a lot. Would like to know more about this procedure.