Programming: This forum is for all programming questions.
The question does not have to be directly related to Linux and any language is fair game.
Currently it's quite difficult to figure out what your compiler is doing because you put the functions in alphabetical order. It's a bit like reading a story when the pages are sorted alphabetically.
We wrestled quite a bit with the "proper" ordering of routines. It turns out -- at least we thought -- that sometimes you want them one organized way, sometimes another. And we're quite sure that in real brains, things are not in any particular hierarchical order. So in the end we said, "Let the programmer arrange them however he wants," and got into the habit of simply finding what we wanted when we wanted it (using the "CTRL-HOME, CTRL-F, start typing" sequence on the keyboard).
To trace our compiler's logic, for example, all you need to do is find "To compile a directory" and then chase down each of the called routines using the same (keyboard) technique. But that, of course, brings us to a particularly salient example of what I said a moment ago ("sometimes you want them one organized way, sometimes another"): some folks, in trying to understand the thing, would prefer a "depth first" search; others would think "breadth first" made more sense; still others would combine both kinds of search as things seemed interesting or not. Which is why we use arbitrary (or alphabetical) ordering in conjunction with the Find command, rather than mount a futile attempt to sequence the routines in a fashion that will be pleasing and meaningful to all.
I think sag47 somewhat exaggerates the extent to which the Linux kernel changes; you will probably only be using very basic system calls like read(), write(), stat(), and readdir(), and those don't change.
Apologies; the last time I compiled from one major kernel version to another, I remember having to recompile other things, though I can't quite remember what they were. I still think I raised an important point about Gerry's 30-year mission in his manifesto. Keeping future portability of the compiler in mind is important if the project is to reasonably last 30 years.
Quote:
Originally Posted by Gerry Rzeppa
To trace our compiler's logic, for example, all you need to do is find "To compile a directory" and then chase down each of the called routines using the same (keyboard) technique.
I agree with ntubski about the ordering of the routines. Something like "To compile a directory" is not inherently obvious. If you put it at, say, the top of the document, then it might be the first thing I read and where I start looking at how the compiler works.
Okay, seems simple enough, though others have suggested that the primary routine should be at the bottom of the file since (logically, at least) all the other routines have to be defined before they can be called. But what order is appropriate for the rest of the routines? Should we map the call tree depth first or breadth first? And where should we put routines that are called in more than one place?
These same issues arise in database design and are the primary reason for the general failure of hierarchical systems and the general success of relational ones -- specifically, the former force a particular "view" of the data on the physical structure, while the latter do not. I think what would serve everyone best is a "relational" approach to the problem: a handful of additional editor commands that would arrange the routines, on demand, into depth-first, or breadth-first, or alphabetical, or other desirable sequence. I've got no problem with that at all. In fact, the alphabetical sort is already implemented. I suspect the others will quickly appear once we get the thing converted and the Linux community of programmers is actively extending the system.
Yo, Sergei. How about we start fresh and give the moderators a break? You ask, I'll answer.
I see an inconsistency in your testimony: you said you intended to ignore me.
So, where are we? How did you resolve the fundamental problems that led to the creation of computer languages in the first place? No pep talk, please; just the naked truth.
Since every successor theory must subsume the one before it, it has to meet an ever higher standard: it must explain the larger, unified body of observations explained by the previous theory and unify that with still more observations. In other words, as scientific knowledge becomes more accurate over time, it becomes increasingly harder to produce a more successful theory, simply because of the great success of the theories that already exist.
?
That is, prove to me and others that your approach solves all the problems already solved by other approaches, and solves problems not yet solved by them.
Quote:
Originally Posted by Gerry Rzeppa
Okay, seems simple enough, though others have suggested that the primary routine should be at the bottom of the file since (logically, at least) all the other routines have to be defined before they can be called. But what order is appropriate for the rest of the routines? Should we map the call tree depth first or breadth first? And where should we put routines that are called in more than one place?
These same issues arise in database design and are the primary reason for the general failure of hierarchical systems and the general success of relational ones -- specifically, the former force a particular "view" of the data on the physical structure, while the latter do not. I think what would serve everyone best is a "relational" approach to the problem: a handful of additional editor commands that would arrange the routines, on demand, into depth-first, or breadth-first, or alphabetical, or other desirable sequence. I've got no problem with that at all. In fact, the alphabetical sort is already implemented. I suspect the others will quickly appear once we get the thing converted and the Linux community of programmers is actively extending the system.
Yes and no. In C, you can declare the functions in a header file, so the order in which you put everything in the .c file doesn't necessarily matter. C also has a standard place to start looking: main(). I, and anyone who has ever played with C, know that. On the other hand, your language is new and doesn't necessarily follow any standard beyond attempting to be plain English as best it can. That said, here's what I and others have complained about when trying to look through your source: we don't know where to start, because there is no standard main() that appears in every source file. So an additional README on how to analyze the compiler source (hints on where to start, etc.) would be highly helpful for someone who has never looked at your language before.
Just a suggestion. Also, Comic Sans is difficult to read in very large documents.
Quote:
Originally Posted by Gerry Rzeppa
... (logically, at least) all the other routines have to be defined before they can be called. ...
And what do you do with mutually recursive things in general? Or is the very notion of mutual recursion not present in your worldview? Or do you need a programmer who is open-minded to the extent that he is ready to forget that mutual recursion exists?
Ah, but it does. I've been an educator for a very long time and I know from actual experience that it's easier to teach kids to program in English than, say, Python or -- God forbid! -- something like C++. ...
The shocking possibility that dumb people don’t exist in sufficient numbers to warrant the millions of careers devoted to tending them will seem incredible to you. Yet that is my central proposition: the mass dumbness which justifies official schooling first had to be dreamed of; it isn’t real.
PART ONE
Of Schooling, Education, And Myself
Our official assumptions about the nature of modern childhood are dead wrong. Children allowed to take responsibility and given a serious part in the larger world are always superior to those merely permitted to play and be passive. At the age of twelve, Admiral Farragut got his first command. I was in fifth grade when I learned of this. Had Farragut gone to my school he would have been in seventh.
So to me your effort appears to be directed at dealing with dumbed-down students instead of first un-dumbing them and explaining to them why human languages are not suitable for programming.
I doubt you'll go the strike-the-root way; most likely you (and thus the "open minded" programmer you are looking for) are paid by those in whose interest it is to continue dumbing down children.
Last edited by Sergei Steshenko; 03-05-2013 at 11:23 AM.
The more I learn about Linux, the less I'm attracted to it. The need to recompile my applications when an upgrade comes out sounds more like slavery than freedom to me...
...In fact, our compiler automatically converts properly defined units of measure as necessary. "Wait 1-1/2 minutes" and "Wait 90 seconds", for example, are equivalent because we told the compiler, in the noodle, that "A minute is 60 seconds."
The phrase "1-1/2 minutes" is not properly defined in the first place. I read it as (1 - 1/2) minutes, which according to my knowledge of fractions is (1/2) minute, which is 30 seconds.
So, I am asking again: how do you deal with the ambiguity of human languages in your "plain English" compiler?
As far as I can tell, you are using a language that is so ambiguous it requires significantly more text and explanation to get the point across clearly than any programming language that currently exists. My boss will quite often ask me to present some data in some specific format for a paper/proposal. It usually takes at least twice as long for him to explain, using "plain English", how the figure should look than it actually takes me to write the code to generate it. This is because of the ambiguity in the English language. Things have to be said two, three times to define the bounds and describe it unambiguously. This is not a problem with existing programming languages. This is why programming languages look the way they do.
You are building a compiler that has no support for linking external libraries. This means that every program the user writes must be written from scratch. There's no code database to grab from, no standard libraries that can be linked in. Not only that, but once the user does start developing some of their own libraries, they must be copied into every single program that uses them, which means general maintenance, bug fixing, and optimization of these libraries becomes nearly impossible. Can this compiler even link in objects written in another language?
Last edited by suicidaleggroll; 03-05-2013 at 10:10 AM.
Quote:
Originally Posted by suicidaleggroll
As far as I can tell, you are using a language that is so ambiguous it requires significantly more text and explanation to get the point across clearly than any programming language that currently exists. My boss will quite often ask me to present some data in some specific format for a paper/proposal. It usually takes at least twice as long for him to explain, using "plain English", how the figure should look than it actually takes me to write the code to generate it. This is because of the ambiguity in the English language. Things have to be said two, three times to define the bounds and describe it unambiguously. This is not a problem with existing programming languages. This is why programming languages look the way they do.
If English were as ambiguous and unwieldy as you make it out to be, it would have been impossible for us to write, conveniently and efficiently, the program in question: a complete and integrated development system including a unique desktop, an elegant text editor, a straightforward file manager, a hexadecimal dumper, a native-code-generating compiler/linker, and a full-featured wysiwyg page layout application -- in just 20,000 lines of the stuff. So the program itself stands in striking and concrete opposition to your assertions.
Note that I'm not saying that the same program coded in, say, straight C, would contain as many or more characters; it would undoubtedly have fewer. But it would also lose a great deal of its intuitive character, its explicability, its friendliness to beginning programmers -- in short, it would be significantly less charming.
And again, I think you're misunderstanding what we're proposing: a programming language that is natural at the top but supportive of specialized syntaxes wherever appropriate. Here is a link to a translation of one of Einstein's early papers on Relativity, so you can see what we have in mind:
Note that the bulk of the thing is "coded" in a natural language. Surely if a genius like Mr. E. finds it advantageous to speak and write in this way, we should not so quickly and summarily dismiss the idea.
Quote:
Originally Posted by suicidaleggroll
You are building a compiler that has no support for linking external libraries.
Simply not true; the current version supports efficient and convenient access to all of the various components of the Windows operating system, and is also able to call (or dynamically execute) any other chunk of code on the machine.
Quote:
Originally Posted by suicidaleggroll
This means that every program the user writes must be written from scratch.
Not so. See above.
Quote:
Originally Posted by suicidaleggroll
There's no code database to grab from, no standard libraries that can be linked in.
Not so. See above. And don't forget that "the noodle" is a standard library.
Quote:
Originally Posted by suicidaleggroll
Not only that, but once the user does start developing some of their own libraries, they must be copied into every single program that uses them, which means general maintenance, bug fixing, and optimization of these libraries becomes nearly impossible.
As I said in a previous post, there's "a time for replicated code, a time for common repositories": We're in favor of both. In the current version, the noodle represents the former, the Windows operating system the latter; the one is accessed by internal calls, the other with external calls; the former are statically linked at compile-time, the latter are dynamically linked at run-time.
Quote:
Originally Posted by suicidaleggroll
Can this compiler even link in objects written in another language?
Clearly your language is superior to all other programming languages and has absolutely no downsides. Good luck with your development.
How about a program that loads data from two binary files with a defined format (say 64 bit words, bits 0-8 are health status, bits 9-31 is a timestamp, and bits 32-63 represent a 32-bit floating point measurement). It then performs a cubic spline interpolation on the second data set to the timestamps from the first data set (so they are now on the same time grid), applies a discrete Nth order butterworth highpass filter to both data sets to remove slow variations, performs a cross correlation to determine the time lag between them, and then writes out the results. Of course this is a real time data stream, so it must work in batches of 1000 measurements, continuously going back to the data stream, loading the next 1000 measurements, performing the analysis, writing the results, going back to the stream to grab more, etc. until the program is stopped. And of course since it's an IIR filter, you'd need to save the last N inputs and outputs from the previous batch to continue the filtering on the next batch.
That's the rub. NO programming language will solve problems like that by itself.
In fact, if a language is so terrific that it encourages you to skimp on the planning/designing/prototyping stages, then it is counterproductive.