Compiler conundrum: Which came first, a compiler or its source code?
a compiler is not one piece of software. it starts out with a lexer, then a parser ... and ends with assembly code, which an assembler turns into object code; the linker then resolves symbols and maps them to addresses. so, basically, it's a tool which checks the syntax and semantics of a language, creates code readable by the machine, and ...
Yeah, they built simple (relatively, anyway) compilers that they used to compile the more complex compilers, and it all just evolved over time... basically. But as was said above, a compiler is really a group of several different tools, and they were built mostly from the ground up the first time.
I used to have a box that plugged into the 'user' port of my C64. It had a bunch of switches and LEDs and basically allowed me to toggle code directly into memory. Even the smallest program was tedious enough, but it was fun and taught me a lot about programming early computers.
Hmm. This is interesting. Thanks for all your replies. Compiling is part of my daily routine (obviously) and I've been curious about its origins. I think I have a better understanding of just how advanced things have gotten.
An interesting point is that most compilers are written in the language they are built to compile...
A good example is C#. The first C# compiler obviously couldn't have been written in C#, so they must have built a compiler in another language to develop the first C# compiler.
An example was the transition from C to C++. The first C++ system, Cfront, was a preprocessor that converted the C++ source to C source and then ran it through a regular C compiler. It took a few years for vendors to deliver compilers that compiled C++ directly.
actually, there was, i believe, a 'B' programming language, which was woefully inadequate. the 'C' language's origin (as a lot of you probably know) was the goal of making the Unix operating system portable, instead of having to rewrite it (or at least do a lot of adapting) in assembly every time a new machine was developed. hence the C language was born, for this purpose...
There was also a precursor to the language B: BCPL, which B was essentially a cut-down version of. I suppose it is even possible the original C compilers were built with B, but I am pretty sure that is not how Stallman started GCC; the first GCC was written in C and bootstrapped with an existing C compiler. Maybe somebody should download the first version of GCC and see how it is built, and also see if it can build itself once it is built. My guess is it can.
Fortran stands for "formula translation," and it predates C by several years. It was developed by IBM and some of its customers in the mid-fifties. For a long time it was the most-used language for technical problems.
C was not based on it at all, though.
there were A, B and C... as far as I know... anyway, those things are "lost" in history... perhaps in a million years there will be paleontologists ready to clear it up.
The fact is that in the "early ages" programs were written directly in binary code... no mnemonics... only binary data.
Some people must have realized that a "compiler" was necessary in order to share code between architectures... and of course, to be able to program at a "higher level".
The first compiler (or compilers)... written in binary, of course. After that... why write in binary anymore? Use the compiler you made before to "build" a new compiler. (Sounds paradoxical, right?)
Actually, the intermediate step is assembly. The first programs written in binary quickly gave way to assembly, as it was far more human-readable. After that came the macro assembler, which allowed reuse of modules of code. One of Microsoft's early successes was a macro assembler for Intel chips, MASM, which was the standard for years and was still supported until a few years ago at least.
The early compilers translated C or Fortran or whatever high-level language into assembly code, which was then run through an assembler and finally linked. These steps were slowly merged into a single process that started with source and produced an executable.
Many compilers still support a switch to output assembly language for debugging and optimization.
Yeah, gcc has an option (-S) to see the assembly code it produces. It is quite interesting to compare how it compiles a C program at different levels of optimization. The assembly code at no optimization is actually quite readable and easy to understand; however, it starts to get cryptic very quickly.