Slackware: This forum is for the discussion of Slackware Linux.
Hey guys! I have a question, and I hope you Slackers can answer it and tell me about your experience! Basically, I have 4 machines in my homelab running Slackware and 3 VPSes running Debian.
I have been updating all 7 of these computers manually for over 5 years, as I like to see what is being updated. Nowadays that has become complicated for me, as I have 2 small children and my time is very short!
I'm thinking about automating the updates with a shell script wrapped in a for loop.
Or even Ansible. My experience with Ansible is minimal. What do you think? Do you have any suggestions for me? If you suggest shell scripting, what features could the script have?
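Something like this loop is what I have in mind. The hostnames are placeholders, and the exact upgrade commands are only a first guess; the DRY_RUN default just prints what would run so I can check it first:

```shell
#!/bin/sh
# Sketch of the "shell script wrapped in a for loop" idea.
# Hostnames are placeholders; DRY_RUN=yes (the default) only prints
# the commands instead of running them over ssh.
SLACK_HOSTS="slack1 slack2 slack3 slack4"
DEB_HOSTS="vps1 vps2 vps3"
DRY_RUN=${DRY_RUN:-yes}

run() {
    # Run (or, in dry-run mode, just print) a command on a remote host.
    host=$1; shift
    if [ "$DRY_RUN" = yes ]; then
        echo "ssh $host $*"
    else
        ssh "$host" "$@"
    fi
}

for h in $SLACK_HOSTS; do
    run "$h" "slackpkg update && slackpkg upgrade-all"
done

for h in $DEB_HOSTS; do
    run "$h" "apt-get update && apt-get -y upgrade"
done
```

Redirecting each host's output to a per-host log file would keep the "review what was updated" habit possible even when the loop runs unattended.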
And you could test for a kernel upgrade afterwards, and conditionally run
/usr/sbin/grub-mkconfig -o /boot/grub/grub.cfg
(or the equivalent mkinitrd/lilo commands) followed by
shutdown -r now
as well.
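An untested sketch of that check, assuming the stock vmlinuz-&lt;name&gt;-&lt;version&gt; naming in /boot and a GRUB setup; the APPLY guard keeps it from doing anything destructive until you opt in:

```shell
#!/bin/sh
# Sketch: reboot only when the newest installed kernel differs from the
# running one. Assumes vmlinuz-generic-<version> / vmlinuz-huge-<version>
# style names in /boot; adjust for lilo/mkinitrd setups.

newest_installed_kernel() {
    # Highest version suffix among $1/vmlinuz-* files.
    ls "$1"/vmlinuz-* 2>/dev/null | sed 's/.*vmlinuz-[a-z]*-//' | sort -V | tail -n 1
}

needs_reboot() {
    # $1 = running kernel version, $2 = newest installed version
    [ -n "$2" ] && [ "$1" != "$2" ]
}

# Nothing happens unless APPLY=yes is set explicitly.
if [ "${APPLY:-no}" = yes ] && needs_reboot "$(uname -r)" "$(newest_installed_kernel /boot)"; then
    /usr/sbin/grub-mkconfig -o /boot/grub/grub.cfg   # or mkinitrd + lilo here
    shutdown -r now
fi
```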
You would still need to review the output so that you can respond appropriately when there is a failure. And even when there are no failures, you might still take time to look at htop output to see if you have running services with updated executables or libraries. And you would want to know enough about booting to get around unexpected boot failures. And have some kind of dead-man switch that lets you know promptly when a kernel or service goes down and stays down. And have good backups. Etc., etc.
This is not something I've tried, and won't, partly because I prefer to deal with upgrade problems immediately, and partly because I don't have enough systems to manage that the time saved would be worth the trouble.
Ansible is awesome and is perfect for this type of situation, but if you don't know how to set it up then it will probably be less time-consuming to just use bash scripts and cron to update your systems. For only 7 nodes that is probably going to be the easiest way.
Also, Ansible doesn't have a great module for dealing with Slackware, so if you did go the Ansible route you would probably end up using the shell module to run bash scripts to update Slackware, which is basically putting a hat on a hat.
In the end, Ansible really is made for this type of situation; it's just a matter of whether you want to go through the hassle of setting it up. For one thing, I personally would not use Slackware as my Ansible control machine. I would use a Debian- or RedHat-based OS for that. It can be done, but it's just not how I would do it.
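For what it's worth, once you have an inventory file, ad-hoc commands cover this without writing any playbooks. Everything below is illustrative: the hostnames and inventory path are made up, the Debian hosts use the real apt module, and the Slackware hosts fall back to the shell module as described:

```shell
# Hypothetical inventory, e.g. ~/hosts.ini:
#   [slackware]
#   slack1
#   slack2
#   [debian]
#   vps1
#   vps2

# Debian hosts: Ansible's apt module handles this natively.
ansible debian -i ~/hosts.ini -b -m apt -a "update_cache=yes upgrade=dist"

# Slackware hosts: no dedicated package module, so use the shell module,
# i.e. the "hat on a hat" mentioned above.
ansible slackware -i ~/hosts.ini -b -m shell \
  -a "slackpkg -batch=on -default_answer=y update && slackpkg -batch=on -default_answer=y upgrade-all"
```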
If you don't mind stepping off the beaten path you could also check out my slackscan/slackup tools, which I wrote specifically with non-interactive/scripted use in mind. They won't be any use with the Debian systems of course, but then on Debian you have 'apt' to do it all for you anyway.
I have been in this position personally and professionally. With multiple Slackware systems my approach is to maintain local repo mirrors. That way there is only one download of all packages. AlienBob has some rsync shell scripts, probably still available, to maintain local mirror updates.
Same idea with multiple Debian systems -- look into a tool called apt-cacher-ng. Download once and update everything locally.
One way or another, configure all local systems to use local cached packages. With slackpkg this is a single change from an online mirror to the local mirror. The slackpkg tool supports automation parameters.
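As a sketch (the mirror URL and local hostname below are examples, not recommendations; pick a nearby official mirror):

```shell
# On the mirror host: pull the tree once a night; --delete keeps the
# local copy in sync when packages are removed upstream.
rsync -av --delete rsync://mirror.example.org/slackware/slackware64-15.0/ \
    /srv/mirror/slackware64-15.0/

# On every client, /etc/slackpkg/mirrors needs exactly one uncommented
# line, pointed at the local copy instead of an online mirror:
#   http://mirror.lan/slackware64-15.0/
```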
Next is to decide how to automate updates. About two decades ago I wrote my own sync script, long before tools such as ansible became popular. I still use that script. Basically systems update automatically but only if the primary home network system has been updated. I have a single Current virtual machine and that system updates automatically, but I tend not to launch the system until after reviewing the change log.
When I was an admin I configured Debian workstations and laptops to update automatically with cron jobs, but only after business hours. Work policy was that no updates would be allowed to interfere with users. With servers I only allowed test systems to automatically update. All other servers were updated manually after reviewing the change logs. Duplicate servers such as name servers and virtualization hosts were updated one at a time with 24 to 48 hours in between updating each system. That way systems and logs could be reviewed for issues.
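A sketch of such an after-hours cron entry in /etc/cron.d format (the schedule and log path are assumptions):

```shell
# /etc/cron.d/auto-update on a Debian workstation:
# run unattended upgrades at 22:30 on weekdays, outside business hours.
# Fields: minute hour day-of-month month day-of-week user command
30 22 * * 1-5  root  apt-get update && apt-get -y upgrade >> /var/log/auto-update.log 2>&1
```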
Also note that Pat is not a card-carrying member of the PR kernel-of-the-week-update-now! club. Many if not most security issues are based on physical access to systems and exotic usage scenarios. The lesson is that keeping systems updated with patches is important, but for most people it can be done at a sane and healthy pace. When a security issue arises that is a systemic exploit, be sure there will be many related online articles discussing the exploit. For most home users all systems are behind a router and nominal firewall, and the risk of external hacking is low to non-existent.
How dangerous is automating updates? After more than 23 years of using Linux systems I probably can count on one hand how often a package update bit me. I have been bitten though.
It is important to consider running a stable release if you are going to automatically update Slackware. Slackware-current sometimes requires further action prior to updating your system. You can also use the blacklist feature of slackpkg, or rsync excludes, to skip kernel updates, then do those manually at a later date.
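For example, a few lines in /etc/slackpkg/blacklist keep unattended runs away from the kernel (the stock file ships similar commented-out examples):

```shell
# /etc/slackpkg/blacklist : one package-basename pattern per line.
# With these in place, 'slackpkg upgrade-all' skips kernel packages,
# so they can be upgraded by hand at a convenient time.
kernel-generic
kernel-huge
kernel-modules
kernel-source
```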
Short answer: no. << OOPS, I should have said YES. Thanks Petri Kaukasoina.
The excerpts from the manpage point this out.
From the manpage of slackpkg.conf:
Code:
BATCH
Enables (on) or disables (off) the non‐interactive mode. When run in batch mode, slackpkg
will not prompt the user for anything; instead, all questions will get DEFAULT_ANSWER (see
below).
If you perform an upgrade using this mode, you will need to run "slackpkg new‐config" later
to find and merge .new files.
DEFAULT_ANSWER
This is the default answer to questions when slackpkg prompts the user for some information.
This is used only in non‐interactive mode (when BATCH is "yes" or the user turns batch mode
on via the command line); otherwise, this variable has no effect.
Valid values are "y" or "n".
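The same two settings can also be passed per invocation instead of being set in /etc/slackpkg/slackpkg.conf, which leaves interactive use unchanged; the later new-config pass is the one the excerpt warns about:

```shell
# One-shot unattended upgrade; equivalent to setting BATCH=on and
# DEFAULT_ANSWER=y in /etc/slackpkg/slackpkg.conf.
slackpkg -batch=on -default_answer=y update
slackpkg -batch=on -default_answer=y upgrade-all

# Later, interactively, review the .new config files it left behind:
slackpkg new-config
```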
Last edited by chrisretusn; 01-30-2024 at 11:06 PM.
Hello all, and Linuxsl: the man page has this to say about 'new-config'.
Yes, both the 'BATCH' and 'DEFAULT_ANSWER' options will leave the OLD configured .conf files unchanged.
But my reading of this portion of the manpage leaves a LOT unresolved as to what it actually does when called.
Without a VERY GOOD definition of what it does, I'll not sacrifice my many years of maintained .conf files to such a tool.
Code:
new-config
This action searches for .new configuration files and ask the user what to do with those files.
new-config is very useful when you perform an upgrade and leave the configuration files to be reviewed later.
Instead of a manual search, diff, and replace; you can use the new-config action.
new-config searches /etc and /usr/share/vim for new config files.
The 'slackpkg new-config' command is benign unless you explicitly request otherwise. As an example, I have created a /etc/ntp.conf.new file. Running the command causes that file to be detected and prompts for the action to be taken.
I almost always choose P for prompt.
I then often choose (D)iff to see the changes. If the incoming file lacks local customisation, then I will choose (R)emove to remove the incoming file. If the incoming file includes small changes (e.g. typo fixes), then I will choose (O)verwrite to replace the existing file with the incoming file. If the incoming file has many changes, then I will choose (V)imdiff and edit either the incoming file or the old config file as necessary before using either (R)emove or (O)verwrite as appropriate.
Code:
root@darkstar:~# slackpkg new-config
Searching for NEW configuration files...
Some packages had new configuration files installed (1 new files):
/etc/ntp.conf.new
What do you want (K/O/R/P)?
(K)eep the old files and consider .new files later
(O)verwrite all old files with the new ones
(R)emove all .new files
(P)rompt K, O, R selection for every single file
p
Select what you want file-by-file
/etc/ntp.conf.new - (K)eep
(O)verwrite
(R)emove
(D)iff
(M)erge
(V)imdiff
Tip - When using vimdiff, you can use Ctrl-ww to switch windows, dp to put a block of different lines into the other file and do to obtain a block of different lines from the other file. If things go wrong, use the universal bailout of Esc:qa! which will exit leaving the two files unchanged.
I do the same as @allend. After a while you should know which files you have modified ((P)rompt or (K)eep) and which you have not modified ((O)verwrite). By default slackpkg will save the original overwritten file with the extension *.orig.
Code:
ORIG_BACKUPS
During integration of .new files during the post installation phase, original files are
backed up to a file name with a .orig extension. To prevent this, set this option to "off"
and note that you will no longer have a copy of the content of the file prior to it being
replaced by the .new version.
The default value of ORIG_BACKUPS is "on". Only change this if you are sure you don’t want
backups of overwritten files.
From the command line, you can use ‐orig_backups=value.
@ALL: Babydr thanks you all for the previous insights and knowledge transfer.
@chrisretusn, I am going to open ONE more topic on slackpkg. I believe that you remember this thread... https://www.linuxquestions.org/quest...light=slackpkg
And the other threads around the same dates (ie: 20120406 - 20210610).
My script mentioned in the above is still in use. Thanks to chrisretusn.
I am hoping to find out about some more items that may not be available or recommended, but which I'd like to have available for the script, in light of the items brought up here.