Slackware
This Forum is for the discussion of Slackware Linux.
Now I know this comes up every once in a while:
Just how fast is Slackware, and is it worth speeding up its boot sequence?
Bear with me for just a brief moment:
I don't alter the sequence at all, not a tiny bit;
I merely postpone a few resource hogs until we are (hopefully) in the Desktop Environment, waiting for the browser to start or the calendar to sync or whatever.
So I merely time-shift a few (four) particular startup parts by no more than two minutes, and get the "to desktop" time below a minute on this 10-year-old laptop (DDR2 Core Duo).
So if anyone considers this useful, just run the patch file against rc.M like this:
Quote:
# patch -b /etc/rc.d/rc.M rc.M.patch
and reboot
If you find it not worthwhile, just replace rc.M with the rc.M.orig backup that the patch utility left behind.
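Since the revert step relies on the backup that patch leaves behind, here is a minimal sketch of the apply-and-revert cycle, demonstrated on a scratch copy rather than the real /etc/rc.d/rc.M (the file contents here are toy stand-ins):

```shell
#!/bin/sh
# Sketch: apply a patch with a backup, verify, then revert from the backup.
# Done in a temp directory so nothing system-wide is touched.
set -e
cd "$(mktemp -d)"

printf 'line one\nline two\n' > rc.M        # stand-in for /etc/rc.d/rc.M
printf 'line one\nline TWO\n' > rc.M.new
diff -u rc.M rc.M.new > rc.M.patch || true  # diff exits 1 when files differ

patch -b rc.M rc.M.patch                    # apply; backup saved as rc.M.orig
grep -q 'line TWO' rc.M                     # patched content is in place

mv rc.M.orig rc.M                           # revert: restore the backup
grep -q 'line two' rc.M
echo "patched and reverted OK"
```

On the real system the same `-b` flag is what produces the rc.M.orig you fall back to.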
- I believe the code is self-explaining...
- I have used this setup since 14.0 and have had no problems with it so far; the published patch is pulled from a running 14.2 system.
- Regarding official inclusion: most of the postponed tasks can be run after setup, to prime whatever they do, so they won't be missed on first reboot and onward.
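For the curious, the general shape of the time-shift (a hypothetical sketch of the idea, not the published patch itself; delays and paths are made-up examples) is just a detached, delayed, niced subshell per task:

```shell
#!/bin/sh
# Hypothetical sketch of the rc.M "time-shift": each cache updater runs in a
# detached subshell after a delay, so the boot sequence never waits for it.

postpone() {
    delay=$1; shift
    # Sleep first, then run the task at the lowest CPU priority; redirect
    # output and background the whole subshell so rc.M moves straight on.
    ( sleep "$delay" && nice -n 19 "$@" ) >/dev/null 2>&1 &
}

# The four candidates named in the thread (delays are illustrative only):
postpone 60 /sbin/ldconfig
postpone 70 /usr/bin/fc-cache
postpone 80 /usr/bin/gtk-update-icon-cache /usr/share/icons/hicolor
postpone 90 /usr/bin/update-mime-database /usr/share/mime
```

Nothing in the boot order changes; the tasks simply run a minute or two later, when the machine is idling at the desktop.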
Aha. I think this might be an existing bug - backgrounding ldconfig.
It's probably unlikely on x86, but not on ARM, where people set up filesystems and the OS by editing files and installing things directly into the filesystem, then booting into it. If they (me, as I did many times) forget to run ldconfig -r within the root fs, the system won't boot at all, or will break somewhere along the way.
Also, even without an existing /etc/ld.so.cache, ldconfig completes within 0.450 seconds on ARM. It's not worth forking it into the background - that's probably slower!
The others also fall into this category.
Can you remove the backgrounding and the nice calls and re-test to check?
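The `ldconfig -r` step for hand-built images mentioned above can be sketched like this (the toy rootfs layout and paths are my assumptions; `-r` chroots into the target, so the real run needs root):

```shell
#!/bin/sh
# Sketch: prime a staged root filesystem's ld.so.cache before first boot.
set -e
ROOTFS=$(mktemp -d)                  # stand-in for the mounted target fs
mkdir -p "$ROOTFS/etc" "$ROOTFS/lib" "$ROOTFS/usr/lib"
: > "$ROOTFS/etc/ld.so.conf"         # minimal config for ldconfig to read

# ldconfig -r treats $ROOTFS as / and writes $ROOTFS/etc/ld.so.cache,
# so the image boots with a valid cache even though it never ran ldconfig.
if [ "$(id -u)" -eq 0 ]; then
    ldconfig -r "$ROOTFS"
    ls "$ROOTFS/etc/ld.so.cache"
else
    echo "re-run as root: ldconfig -r $ROOTFS"
fi
```

Forgetting this step on an image that is assembled by hand is exactly the failure mode described above.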
Kind thanks for the feedback.
As you can imagine, I did profile the whole rc.M until I found the points that delay the startup the most.
That was on a Core Duo class laptop with a SATA2 class HDD.
I imagine your ARM is an embedded (no HDD) machine with only a selected set of *.so files deployed?
Or did you test on official complete Slackware ARM with SD-card/flash storage?
I can imagine most of the boot-up delay here comes from head-seek waits.
But could backgrounding actually make things worse? Maybe only if the SSD/flash/SD-card gets confused and prefers co-locating writes in time?
Then I guess this optimization would not apply to a solid-state root filesystem?
I do have a RasPi handy to check this when time permits, not that a "headless" server would benefit from it much...
If you want a real speed-up, you have to start with the hardware. That basically means buying yourself an SSD.
In my case, the boot time dropped from ~60 seconds to just below 20 seconds.
Also, in /etc/rc.d/rc.S you can in most cases safely comment out the "sleep 3" just after "Mounting non-root local filesystems:".
Then you have to be careful with daemon start-up times. For example, libvirtd seems to take quite a few seconds to start.
All that being said, most of the time I'm using hibernation. My system resumes in about 20 seconds, even with 2 virtual machines (Windows 7, Slackware 14.2) running at the time of hibernation.
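The rc.S tweak above can be done with a one-line sed. Here it is shown against a scratch stand-in file, not the real /etc/rc.d/rc.S (keep a backup if you try it for real):

```shell
#!/bin/sh
# Sketch: comment out the fixed post-mount delay instead of deleting it,
# so the change is obvious and trivially reversible. Scratch copy only.
set -e
tmp=$(mktemp -d)
cat > "$tmp/rc.S" <<'EOF'
# (toy stand-in for /etc/rc.d/rc.S)
echo "Mounting non-root local filesystems:"
sleep 3
EOF

sed -i 's/^sleep 3$/#sleep 3/' "$tmp/rc.S"   # GNU sed in-place edit
grep -n 'sleep 3' "$tmp/rc.S"                # shows the now-commented line
```

Commenting rather than deleting means a plain `sed -i 's/^#sleep 3$/sleep 3/'` restores the original behaviour.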
Quote:
I do imagine Your ARM is an embedded (no HDD) computer with selected only *.so deployment?
Or You tested it on official complete Slackware for ARM and an SD-card/Flash storage?
No, it's the build machines, which have the full OS on an SSD, and the tty opens as usual - one of them has an HDMI monitor and the others open a tty through the serial port.
The point I'm making is that those commands take so little time to run that there's no point in backgrounding them, in my opinion; but I haven't tested that theory - I only timed ldconfig.
I suspect that running it in the foreground makes no _perceivable_ difference though.
@drmozes
1. As noted above, SSDs are game changers, agreed. Even MMC/SD cards.
2. I have yet to find a 1TB SSD that I can afford...
Until then my desktop stays on spinning platters, and those four tasks get "backgrounded" - on my machine, at least.
I just wanted to share, in case anyone finds it useful...
I get what you're saying, but on an x86_64 with a normal spinning 2.5" HDD (not SSD):
Code:
root@kitt:~# smartctl -a /dev/sda| grep -i 'Model'
Model Family: Seagate Momentus 7200.4
Device Model: ST9160412AS
root@kitt:~# time ldconfig
real 0m0.174s
user 0m0.134s
sys 0m0.039s
root@kitt:~# rm -f /etc/ld.so.cache
root@kitt:~# time ldconfig
real 0m0.148s
user 0m0.130s
sys 0m0.018s
root@kitt:~# time ldconfig
real 0m0.148s
user 0m0.129s
sys 0m0.019s
Compare this with running it through ionice :
Code:
root@kitt:~# time ionice -c3 nice -n 19 ldconfig
real 0m0.160s
user 0m0.146s
sys 0m0.014s
Run it a few times to see the results, but it's always slower to run through ionice. What you propose makes sense if the tools took a long time to run -- which they would have in the past, but don't any more on modern hardware.
Funny enough, my system runs it similarly fast:
Code:
bash-4.3# smartctl -a /dev/sda| grep -i 'Model'
Model Family: Seagate Samsung SpinPoint M8 (AF)
Device Model: ST320LM001 HN-M320MBB
bash-4.3# time ldconfig
real 0m0.220s
user 0m0.155s
sys 0m0.037s
bash-4.3# rm /etc/ld.so.cache
bash-4.3# time ldconfig
real 0m0.213s
user 0m0.166s
sys 0m0.026s
bash-4.3# time fc-cache
real 0m0.052s
user 0m0.007s
sys 0m0.003s
bash-4.3# time gtk-update-icon-cache
real 0m0.050s
user 0m0.000s
sys 0m0.003s
bash-4.3# time update-mime-database /usr/share/mime
real 0m1.656s
user 0m0.878s
sys 0m0.203s
bash-4.3#
However, including the patch to rc.M saves no less than 12 seconds to login - how about that?
Give it a try: add an uptime command to rc.local and boot with and without my patch. I have used it since 13.37 until now.
Postponing ldconfig, fc-cache, gtk-update-icon-cache and update-mime-database ought to get the boot to login (init 4) sooner.
With autologin (my personal case) and a few other tweaks not included here (to keep the patch clean), one saves at least about 7 (and up to 17) seconds...
Among those tweaks are the 5->2 second HDD mount delay and every other unconditional sleep issued along the way (in inet1.conf I limit the dhcpcd wait to 2 seconds too).
Now maybe ldconfig and fc-cache aren't spectacular savings, but they seemed to slow down rc.M, and once they were postponed it apparently finished sooner.
Maybe time your non-patched vs. patched rc.M?
Of course the same applies to rotating discs, not SSDs with their large RAM caches.
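The with/without comparison suggested above is easiest with a line at the end of rc.local that records how long the boot took (the log path here is my assumption; on a real system this would be appended to /etc/rc.d/rc.local):

```shell
#!/bin/sh
# Sketch: log the time-to-rc.local on every boot so patched and unpatched
# boots can be compared side by side.
LOG=${LOG:-/tmp/boot-times.log}
secs=$(cut -d' ' -f1 /proc/uptime)   # seconds since the kernel started
echo "$(date '+%F %T') reached rc.local after ${secs}s" >> "$LOG"
tail -n 1 "$LOG"
```

A few reboots of each variant and a glance at the log gives a cleaner number than eyeballing the console.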
Quote:
@drmozes
2. I have yet to find a 1TB storage device in SSD that i can afford,...
Me too, so what I did was get a small and affordable SSD (120GB) and use it for my system, whilst home, data and archive partitions go on a normal HDD. Result? Very fast boot, but plenty of storage space!
Code:
sda1  10MB  EFI boot partition
sda2  60GB  root partition (15GB used with Slack 14.2)
sda3  4GB   swap
      47GB  unallocated - for a future second OS installation?
sdb1  /home
sdb2  /data
sdb3  /archive
The system boots like greased lightning!
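For reference, an /etc/fstab matching that split could look something like this (device names, filesystem types and options are my assumptions, not taken from the post):

```
/dev/sda1   /boot/efi   vfat   defaults   1   0
/dev/sda2   /           ext4   defaults   1   1
/dev/sda3   swap        swap   defaults   0   0
/dev/sdb1   /home       ext4   defaults   1   2
/dev/sdb2   /data       ext4   defaults   1   2
/dev/sdb3   /archive    ext4   defaults   1   2
```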
However, I also have an old, but very useful Acer Aspire One mini-laptop. It has a 1TB spinning rust disk, and takes forever to boot! I will probably give your speed up method a try on that.
(In case anyone is wondering why I'm still using an elderly laptop like that: it handles hi-def video with the Linux kernel drivers, has an HDMI output, and is very compact. And I don't like touch screens! It's almost impossible to find anything like that nowadays!)
Yes, Slackware boots in a jiffy-or-two from an SSD ( mine gives me a runlevel 3 login in about 10-sec, including starting up Moria, Postgresql and VMWare Workstation ).
One Q though about installing root on the SSD ...
What do Y'All do about /tmp/ and /var/ ?
I am looking for a good solution for 'hybridizing' some Linux Appliances ( SSD + HDD ) and it seems like I don't want /var/ and maybe not even /tmp/ on the root SSD ...