Linux - General
This Linux forum is for general Linux questions and discussion.
If it is Linux Related and doesn't seem to fit in any other forum then this is the place.
Back in the 1990s, when the Pentium was out, I upgraded: I bought an AMD586-133-P75-based '486 board. It was a 486 motherboard fitted with this jumped-up (single-core) '486 that ran '586 instructions (slowly). The trick was to set the m/b to the 'DX2-80' settings, whereupon it ran at 160 MHz. RAM was 128 MB and the hard drive was puny, but this was probably the fastest-loading box I ever had, in terms of kernel & X, and faster than all the superior peripherals we have today. And we had X, sound, internet: all you'd expect today.
Admittedly, Linux has gone up from a CD-ROM or two to a DVD, and some libraries & executables simply wouldn't fit in 128 MB of RAM. Why is everything so big? Have the devs left in redundant code for addressing ISA cards? Serial ports? Expanded memory?
How come a newish box, with multiples of the memory and CPU frequency and 2 cores, can't outpace what surely was an old banger?
Have you already checked [compared] the running processes/services, the kernel threads, and the drivers?
(There was no PCIe, Bluetooth, wifi, 5.1 sound, UHD displays, perl, python, HTML5, online games ...)
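One way to act on that suggestion is simply to count what each box is running. A minimal sketch for comparing two machines side by side (assumes GNU ps and a Linux /proc; the exact counts will of course vary by distro and init system):

```shell
#!/bin/sh
# Rough inventory of what the system is busy with, for comparing
# an old box against a new one.
echo "userspace processes: $(ps -e --no-headers | wc -l)"
# Kernel threads are children of PID 2 (kthreadd) on modern kernels.
echo "kernel threads:      $(ps --ppid 2 --no-headers | wc -l)"
# Loaded driver modules, if /proc/modules is available.
[ -r /proc/modules ] && echo "loaded modules:      $(wc -l < /proc/modules)"
```

Run it on both machines and the difference in sheer headcount is usually striking before you even look at memory use.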
Software is bigger because of functionality. Operating systems have improved and user interfaces have improved, well, in looks anyway. All of my systems are lightning fast, even my bloated Windows gaming machine: it's running on an NVMe drive and, from power off, boots in about 10 seconds. Hardware is dirt cheap these days, so developers are free to add whatever features they think make a better user experience.
Quote:
Software is bigger because of functionality. Operating systems have improved and user interfaces have improved, well, in looks anyway. All of my systems are lightning fast, even my bloated Windows gaming machine: it's running on an NVMe drive and, from power off, boots in about 10 seconds. Hardware is dirt cheap these days, so developers are free to add whatever features they think make a better user experience.
Pretty much this.
I forget how many lines of code Torvalds says get added to the Linux kernel daily, but the number is fairly staggering over time, especially if you compare it to 15-20 years ago. And just because you have multiple cores doesn't mean a specific app is going to be multithreaded to take advantage of it.
"November 30, 2010 – San Jose, CA – Splashtop Inc, the worldwide leader in instant-access computing, today announced the immediate availability of Splashtop OS (beta), a lightweight, web-centric operating system optimized for notebooks and netbooks."
There was perl, and there was python; don't ask me versions. My current box already has an SSD in it. My current box doesn't have surround sound or Bluetooth. I'm not into online games, and never was. I don't accept that a new desktop PC should need 8-16 cores.
Of course there is bloat over time, and we have security patches in abundance; we even had microcode patches for Meltdown & Spectre. But it looks like things are never going to get faster, which is a shame.
Back in the day, most programs were written either in C or even in assembler. Coders have got lazy and use 4th- or 5th-generation languages now, which of course has a knock-on effect on execution speed; plus most applications have grown exponentially, which also slows the processor down.
If you were to run DOS on a contemporary machine, it would be instant....
Quote:
Back in the 1990s, when the Pentium was out, I upgraded: I bought an AMD586-133-P75-based '486 board. It was a 486 motherboard fitted with this jumped-up (single-core) '486 that ran '586 instructions (slowly). The trick was to set the m/b to the 'DX2-80' settings, whereupon it ran at 160 MHz. RAM was 128 MB and the hard drive was puny, but this was probably the fastest-loading box I ever had, in terms of kernel & X, and faster than all the superior peripherals we have today. And we had X, sound, internet: all you'd expect today.
Admittedly, Linux has gone up from a CD-ROM or two to a DVD, and some libraries & executables simply wouldn't fit in 128 MB of RAM. Why is everything so big? Have the devs left in redundant code for addressing ISA cards? Serial ports? Expanded memory?
How come a newish box, with multiples of the memory and CPU frequency and 2 cores, can't outpace what surely was an old banger?
A box from today will blow that type of machine away in performance. The code left in the OS to support old technology means little: scanning for hardware may slow startup slightly, but that old code is never loaded if the hardware is not found during the scan. What you lament is the lack of any really big recent improvement in the underlying architecture of the machines. The last one from Intel was the switch to the Core 2 Duo, a vast improvement in both speed and the power consumed by the chips. Everything since then has been minor tweaks and small incremental improvements, nothing earth-shattering: mainly adding more cores, which gives a theoretical gain only as long as the software is well optimized to take advantage of them. Unless processor designers do what Apple has done with its ARM designs, raising the speed of instruction execution while lowering power consumption, it is not going to change any time soon for the old product lines. In short, lack of innovation and the milking of profit from old designs will keep things the way you complain about for the foreseeable future, at least for the dinosaurs of the chip-making business.
You can certainly see the difference if you talk about something that uses processor horsepower and bus speeds, like encoding videos with ffmpeg, or compiling a bunch of C.
I can recall...
The first time I tried to encode a video into x264, on a PIII at 800 MHz, I got an output of 1 fps.
The same thing on a P4 at 2 GHz got 4 fps.
On a dual-core Intel at 2.9 GHz I get 24 fps.
On an Ivy Bridge machine at 3.8 GHz, 4 cores / 4 threads, I get 60 fps.
On a Haswell machine at 4 GHz, 4 cores / 8 threads, I get 80 fps.
In fact they are so fast now, and generate so much heat, that I limit the processor's max speed. You can get a 7-year-old machine now that someone has thrown away, install Linux on it, and put it to good use.
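Those numbers make the frequency-vs-throughput point nicely: the clock only went up about 5x over the PIII, but encoding throughput went up 80x. A quick sketch tabulating the speedups (fps and clock figures are the ones quoted above; the awk invocation is just arithmetic, nothing ffmpeg-specific):

```shell
#!/bin/sh
# Columns: label, clock in GHz, encode throughput in fps.
# Speedups are computed relative to the PIII baseline (0.8 GHz, 1 fps).
awk 'BEGIN {
    print "PIII      0.8 1";
    print "P4        2.0 4";
    print "Core2     2.9 24";
    print "IvyBridge 3.8 60";
    print "Haswell   4.0 80";
}' | awk '{ printf "%-10s clock %4.1fx  encode %5.1fx\n", $1, $2/0.8, $3/1 }'
```

The gap between the clock ratio and the encode ratio is everything the clock number hides: wider cores, SIMD, caches, memory bandwidth, and extra threads.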
Quote:
Why is everything so big?
Firefox is a good example. Do you remember when it was at version 1? KDE is another example. Remember version 2?
If you want an operating system that shows you the real speed of current hardware, I suggest one of two options:
#1 install an old OS on current hardware, and on legacy hardware, and compare the speed. Compiling my BBS software in FreeDOS with the FreePascal compiler is a great example: compile time goes from 56 minutes on the legacy box to 12.9 seconds.
#2 install a current operating system optimised to use the speed of the hardware, such as KolibriOS. It just RIPS! (As it should: Kolibri is an everything-from-assembler project, and KolibriOS is nearly 100% machine code!)
Modern operating systems are generalized to work with a LOT of different hardware bits, and take advantage of all of that hardware improvement to do a LOT more things fast enough for the average user. They do a good job of that. They are BUSY, because they are doing an INSANE number of things at the same time: something we did not often see on our Z80-based CP/M machines!
I was putting it down to the enlarged size, and some use of higher-level languages. And of course multicores help.
It just seems crazy that the compile might speed up but the boot time doesn't.
But is it ever going to get any better?
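On the boot-time question specifically: on a systemd-based distro you can see exactly where boot time goes, and it is usually a handful of services, not the kernel. (A sketch; it assumes systemd, so it won't apply on Devuan or antiX, where other init-profiling tools fill the same role.)

```shell
#!/bin/sh
# Total boot time, broken down into firmware, loader, kernel and userspace.
systemd-analyze

# The slowest-starting units first.
systemd-analyze blame | head -n 10

# The chain of units the boot actually serialized on.
systemd-analyze critical-chain
```

More often than not the kernel's share is a couple of seconds and the rest is services waiting on each other, which is why a faster CPU barely moves the number.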