Linux - General: This Linux forum is for general Linux questions and discussion. If it is Linux-related and doesn't seem to fit in any other forum, then this is the place.
Perhaps up for discussion/debate: Arm may very well be the future. How worried should Intel be? Should Intel start making Arm processors? What do users think about Linux on Arm? Add any discussion or debate points you wish, please...
Last edited by kernelhead; 08-21-2022 at 08:50 PM.
It's been a few years, but in an interview Torvalds said that he isn't actively writing kernel code anymore, just sort of overseeing things. And yeah, correct me if I'm wrong.
Anyhow, he doesn't much care about the distro he's using (just as long as it isn't a pain to install or use), and I'd guess that the underlying hardware is of no relevance to his recent work.
I'm most intrigued and interested in your statement, here, that ARM will never be as capable as Intel or AMD. When Apple recently put the M1 in their laptops & desktops, I did a fair amount of reading about ARM. I'd be interested, if you would be so kind, to hear you elaborate on your reasoning here. Would you argue that Intel/AMD will be able to keep the speed advantage in their processors? I do believe that ARM has already demonstrated that, at least, it has the advantage of running at much lower temperatures. I'm also interested in other users' opinions on this - whether they agree with you or disagree, & their reasoning too.
Thanks for the link; I read it twice. It does get a bit complicated, to me anyway, as ARM is itself a RISC design, but the article suggests that ARM and RISC-V may end up competing with one another.
Anyway, I digress. My personal opinion on why the M1 has succeeded for Apple is that Apple controls both its hardware and its operating system. Software companies like Adobe have no choice but to release software to run on the Arm/M1 for Apple laptops/desktops, as the M1 is the only option. With PCs, Microsoft has made ARM computers (the Surface) & thus Windows will run on ARM. But Arm hasn't taken off on the PC side, again in my opinion, because Microsoft doesn't control/make most of the hardware in the PC world. It can also be noted that I've read that Microsoft has servers running on ARM & also there is one PC company, I forget its name, that has been making hardware based on ARM.
Quote:
Originally Posted by kernelhead
I'm most intrigued and interested in your statement, here, that ARM will never be as capable as Intel or AMD.
Maybe my opinion on that is a little too strong, and these things are being developed at breakneck pace, so it may be outdated soon enough.
Admittedly, my own experience with ARM CPUs is little more than in the dinky devices we tend to carry around.
To give a practical example of my own findings, virtualisation and emulation don't run well on ARM chips. These are two technologies that I use daily and cannot live without.
Quote:
Originally Posted by kernelhead
When Apple, recently, put the M1 in their laptops & desktops, I did a fair amount of reading about ARM.
I found this opinion online, which sums up the problems with ARM:
Quote:
Originally Posted by some internet rando
ARM processors don't tend to scale up well in the high-performance segment since they don't have extensions comparable to what SSE3/4 or AVX offers on the x86 platform. Support is being worked on to bring scalable vector extensions to data centers and HPC, but no current ARM products use it.
You also have the issue of simpler instructions which are inadequate for heavier workloads; on ARMv7/v8-A they're all 32 bits long, even in 64-bit mode (AArch64). By contrast, x86 benefits from instruction density since it has variable-length instructions (up to 120 bits) and more complex instructions, meaning that fewer bits are required to express a single operation. Take for example the addq (%rax,%rbx,2), %rdx instruction, which is executed in one go on modern x86 cores. To do the same thing on ARM, you would have to use a load instruction before using the add instruction, meaning that you'll need more cycles to do the same thing.
Another disadvantage of ARM systems concerns device enumeration, which is a no-brainer on x86 systems due to industry standards like BIOS/UEFI. An OS running on an x86 system can know exactly what is attached or connected to X or Y bus. On ARM there is no such thing, because of proprietary buses and non-discoverable components. Take Linux on ARM, for instance: if you want to tell it what features, buses or devices are there, you must use workarounds like device trees/device tree overlays and a lot of out-of-tree blobs which are not open source. ARM platforms don't have a unified industry-wide BIOS; instead each vendor uses its own implementation and sometimes its own internal buses. On x86 most hardware devices have open-source drivers and are generally easy to set up; you don't need those hacks and workarounds. ARM's business strategy revolves around its multitude of IP (from ARM cores, interconnects and buses to Mali GPUs), which it licenses to its 'partners'. Each partner can then design its own SoC implementation, which is not necessarily similar to anyone else's. This is why it's hard to have a fully functional OS+kernel for all ARM platforms.
Another thing is cache coherency and high-bandwidth buses. This is something that the ARM platform struggles with due to power constraints (buses can use a lot of power!) and ARM's obsession with small dies, which is why until very recently ARM cores lacked an L3 cache for cache coherency and instead used a fancy CCI bus to do so.
I'm somewhat removed from CPU development, but I guess many of the issues are being worked on?
Either way, this move by Apple will only increase the aforementioned pace of development, and could prove to be a good thing in the long run.
Quote:
Originally Posted by kernelhead
Would you argue that Intel/AMD will be able to keep the speed advantage in their processors? I do believe that ARM has already demonstrated that, at least, it has the advantage of running at much lower temperatures.
ARM & RISC designs are, and will be, in consumer products and some servers, but I don't think they will take over serious heavy-computing environments; at least, that's my take on it.
P.S. Raspberry Pi SBCs have run on ARM from their inception.
As someone with a hardware perspective: big-endian devices are much more elegant designs than little-endian ones. This translates into lower cycles per instruction.
Sure, x86 devices are far ahead on muscle, but Arm is far ahead on performance per watt, and that may become very important in the future. Optimization has limits, and Intel & AMD are pretty much there. Arm will catch up, but has to thread its way between Intel patents, AMD patents, and Meltdown & Spectre.
It's also worth noting that Arm CPUs have usually been 1-2 GHz devices. That is changing. Apple's first Arm Macs drastically cut compile times by boosting to 3.2 GHz. But they are using 5nm lithography.
Ampere Computing sells servers with 80 Arm A76-derived cores (optimised for multiprocessor boxes) and double resources (2 separate 128-bit memory buses, oodles of PCIe lanes). They run at 3.0/3.3 GHz, and their proprietary cooler ensures they stay in boost 100% of the time. The CPU uses 250W, about 3W per core. I spotted some here among a selection of Arm servers. Their lithography is 7nm.
Raspberry Pi is interesting. My (early) Pi 4 runs at 1.5 GHz. Later ones run at 1.8 GHz, with a variable CPU power output. You can configure an 'over_voltage' setting in config.txt on the later ones and overclock to 2.2 GHz without issue, and even higher. Beyond that it varies with the individual board. The one article I read went to 2.8 GHz. Even 1.5 to 2.0 GHz is a 33% improvement. But I don't think Broadcom went below 16nm lithography.
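For anyone wanting to try this, the relevant config.txt settings on a Pi 4 look roughly like the fragment below. `arm_freq` and `over_voltage` are real Raspberry Pi firmware options, but the specific values shown are only an assumption of a commonly reported starting point; stable settings vary from board to board, so test your own:

```
# /boot/config.txt -- illustrative overclock values only, verify stability yourself
over_voltage=6     # raises core voltage in 25 mV steps; 6 is the usual safe ceiling
arm_freq=2000      # target ARM clock in MHz (stock Pi 4 is 1500 or 1800)
```

Going past over_voltage=6 generally requires force_turbo, which sets the warranty bit on the SoC, so most overclocking guides stop there.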
It may take 10 years, but there's room for significant optimizing in Arm. In 5 or 10 years, Arm will be the big boy. It already has the mobile market, and Macs. The slice may get smaller for Intel/AMD.
Last edited by business_kid; 08-23-2022 at 05:47 AM.
Quote:
Maybe my opinion on that is a little too strong, and these things are being developed at breakneck pace, so it may be outdated soon enough.
Admittedly, my own experience with ARM CPUs is little more than in the dinky devices we tend to carry around.
To give a practical example of my own findings, virtualisation and emulation don't run well on ARM chips. These are two technologies that I use daily and cannot live without.
I found this opinion online, which sums up the problems with ARM:
I'm somewhat removed from CPU development, but I guess many of the issues are being worked on?
Either way, this move by Apple will only increase the aforementioned pace of development, and could prove to be a good thing in the long run.
And at lower rates of power consumption.
Yes, I believe that problems are being worked on. Also, I'm curious as to how many of these "tasks" are for very high end computing - it seems that Apple's M1 processors are thus far adequate for personal/home computer users (?)
Quote:
Also, I'm curious as to how many of these "tasks" are for very high end computing - it seems that Apple's M1 processors are thus far adequate for personal/home computer users (?)
Surely, that's a matter of perspective? How do you define the average personal/home user?
If you're talking about word processing, emails and web browsing, then yes you're right... but one practical example of where it currently fails is that Steam still doesn't work properly on an M1 Mac. If/when that problem gets fixed, then the door will certainly open for more users.
To give another simple yet practical example: there are developments like this one, driven by Apple's choice to use an ARM CPU: https://www.linuxadictos.com/en/dosb...novedades.html Things like this might be completely irrelevant to you, but from where I sit that's quite a positive outcome. If I were shopping for new hardware to use at home, I'd instantly dismiss anything which couldn't run DOSBox and similar software.
And a 3rd practical example: the box which I have at my (small) office is a 3-year-old Intel NUC 8i5 running a bare-metal hypervisor with 5 VMs on top of it. Each of the VMs is necessary for my work, and without the ability to virtualise I'd require 5 physical machines. The combination of hardware and software has been near bullet-proof, is scalable, and enables high levels of productivity. This is why I say that virtualisation capability is a requirement for me. There is a lot of development and testing in this space from the major players, but official/commercial products are still quite thin on the ground.
So you see, I wouldn't have thought that my requirements are "high-end," but at this point there are some practical limitations to choosing an ARM CPU. Between Apple and Linus, they might sway the future and I dare say that in 5 or 10 years the conversation will be very different.
Quote:
Originally Posted by business_kid
It may take 10 years, but there's room for significant optimizing in Arm. In 5 or 10 years, Arm will be the big boy. It already has the mobile market, and Macs. The slice may get smaller for Intel/AMD.
And of course ARM is used 'everywhere' in the embedded/SBC market... so in that sense, ARM is the dominant CPU architecture in use by sheer numbers. The new little RP2040 chip is available for $1 for board designers, and for us users, $4 to $6 for very usable boards... Intel and AMD CPUs are really used in a 'niche' market.
For general home use (my definition is browsing/e-mail/docs/spreadsheets/cropping-pictures type work), I found the RPi 4 'adequate' as long as you used an external SSD for booting/loading applications. For compiling the Linux kernel, nope. For video editing, nope. Not there yet. My needs require a higher-end desktop CPU (I use the Ryzen 5000 series) for my development cases. So for me, ARM isn't there yet for my desktop/development system. If you are a gamer... definitely not.