Linux - Software
This forum is for Software issues.
Having a problem installing a new program? Want to know which application is best for the job? Post your question in this forum.
Hand-coded assembly language is found in the Linux kernel only in the so-called "trampoline" modules that are used to initiate the boot process. (For instance, taking a freshly-reset x86 microprocessor from real to protected mode, and "bouncing it off the trampoline" on a one-way trip into the Linux kernel startup code.) Otherwise it appears only as inline asm(...) blocks within "C" source files in the architecture-specific /arch directory.
This is typical.
Modern microprocessors really aren't designed anymore to accept hand-made assembly code: they're designed to accept the output of compilers, and microprocessor designers work closely with the architects of those compilers (and, produce compilers of their own) to cause compilers to generate optimal machine-code sequences for different architectures and models.
If you want the very best machine-code to do something, write a small "C" program. Seriously. You simply won't come up with anything better, doing it by hand.
Last edited by sundialsvcs; 08-13-2017 at 08:45 PM.
Quote:
Originally Posted by sundialsvcs
Modern microprocessors really aren't designed anymore to accept hand-made assembly code: they're designed to accept the output of compilers, and microprocessor designers work closely with the architects of those compilers (and, produce compilers of their own) to cause compilers to generate optimal machine-code sequences for different architectures and models.
Can you provide a link backing this up? As far as I'm aware, ASM (machine code) is the only language a processor understands. All the high-level languages are compiled into ASM so that they can run on the processors. Cross-compilers are so that you can move the code from one processor type to another, but it should still be ASM code that the processor is getting.
ASM is a mnemonic for remembering binary instructions. It's one instruction per mnemonic (per line). Processors run binary code, not ASM. All programming languages compile to binary before they can run on a processor.
Assembly is still used a lot. But, unlike C, assembly is a bit different for every processor architecture, and it's different between Microsoft and Linux: even on the same chip, MASM on Windows uses Intel syntax while the GNU assembler on Linux defaults to AT&T syntax. The compilers are also different. You would think either the language or the compiler could be standardized, but no.
You don't need assembly on PCs much anymore, because of their processing power. But assembly still matters on small systems, such as clocks, digital watches, locks, sensors, electronic instrumentation, vehicle control, aircraft navigation, and smart cards; in speed-, mission-, or time-critical applications such as nuclear weapons and guidance systems, industrial process control, and sensor integration; and in security applications.
I have a digital multimeter. It must process 50,000 readings per second in order to fully exploit the onboard A/D converter, which converts the analog reading from the internal voltage sensors to a digital representation that can be converted to a digital readout.
The device is actually a specialized computer that measures electrical signals and displays the correct value. It has memory and a processor. Because it must execute code as fast as possible, its firmware must be written in assembly.
But assembly is tied to the processor architecture. So, learning assembly for one processor type does not completely carry over to another processor. Some aspects of microprocessors are universal, such as basic programming constructs like stacks, registers, jumps, skips, branches, etc.
So, if you're talking about PCs, most programmers don't use assembly anymore.
Assembler is often synonymous with machine code, but not always. It is still a general language and needs translating (assembling) for a specific machine. In most cases it is pretty much a one-for-one translation, but only if the processor itself has a fairly complex instruction set. Many RISC machines need several instructions for each assembler statement.
Quote:
and it's different between Microsoft and Linux.
On the same machine there will be no difference. MS code and Linux code that perform the same action will use pretty much the same assembler at individual code statement level.
Last edited by dave@burn-it.co.uk; 08-14-2017 at 02:00 PM.
Quote:
Originally Posted by dave@burn-it.co.uk
On the same machine there will be no difference. MS code and Linux code that perform the same action will use pretty much the same assembler at individual code statement level.
It is like translating a simple sentence in two languages into a third language. If the original sentences mean the same thing then they will both translate into the same output in the third language.
A request for a cup of tea, for instance in English and German, will both translate to the same thing in French.
Last edited by dave@burn-it.co.uk; 08-15-2017 at 02:56 AM.
Quote:
Can you provide a link backing this up? As far as I'm aware, ASM (machine code) is the only language a processor understands. All the high-level languages are compiled into ASM so that they can run on the processors. Cross-compilers are so that you can move the code from one processor type to another, but it should still be ASM code that the processor is getting.
If I'm wrong then I stand corrected.
As others have said, the microprocessor executes binary instructions – machine code. One way to express those instructions and thus to construct a computer program is "purely by-hand," using an assembler to specify those instructions one-at-a-time.
It's an arcane art, but most of us have done it, and it is valuable to know how to do it – to know what the (conceptual) architecture of the target machine looks like. Today, however, it isn't a practical development strategy in most cases. (Even for many microcontrollers and embedded systems.)
Modern microprocessors are very complex beasts which feature pipelines and internal parallelism which, if exploited properly, can greatly increase execution speed. Creating appropriate instruction sequences to achieve this is quite difficult, however, if attempted "purely by-hand." An optimizing compiler, on the other hand, can do so quite readily.
Microprocessor manufacturers engineer their systems to accept the output of optimizing compilers, and they work with compiler writers to develop (and, they put into their own compiler) optimization algorithms that will generate the best instruction sequences in a particular case for a particular processor model. (You can, if you so choose, tell gcc exactly what model, as well as family, of microprocessor you wish to produce machine-code for, and the output will be dovetailed to that chip's design quirks.)
If you study a "disassembly" of the output of a modern compiler, and compare it to the source, you'll find that it is probably considerably different: parts of the logic have been moved around, you encounter arcane instructions that you've never heard of before, several instructions were generated where only one will do, and so on. There's a lot of voodoo going on! And yet, this is what an optimizing compiler is designed to do, and this is what a microprocessor today is designed to accept.
Even within the /arch directory of the Linux kernel, most of the assembly-level code is (selected portions of) specific subroutines. Few of those subroutines consist entirely of assembly code. Only the "trampoline" is likely to consist entirely of assembly code. Basically, assembly is used "only when there is by definition no other choice." The vast majority of the kernel is written in a high-level language. And this is the modern-day role of assembly language programming in most use-cases, but, not all.
Last edited by sundialsvcs; 08-15-2017 at 11:28 AM.
A colleague of mine long ago observed that all a computer is capable of doing is (1) read-modify-write, and (2) test-and-conditional-branch.

They were absolutely correct. If you look at the microcode, all it is is a bunch of different instructions capable of manipulating data between memory (however it is referenced: direct, indirect, register, pointer, etc.) and other memory, and then deciding whether or not to branch based on test criteria. Similarly, interrupts are merely prioritized instructions of the same nature, governed by the set of rules we identify with IRQ processing.
Higher-level languages do wonderful things; however, understanding exactly what those languages do is sometimes very important.