The Ten Rules of a Good Programme(r)?
Hello,
What might the 10 golden rules of programming be? Here is a first attempt; it will likely need to be adapted.

1. Minimum dependencies (as few as possible).
2. Keep the code as small as possible and do not produce bloat.
3. Code shall be readable and written for humans.
4. The use of { } can matter for readability.
5. Portability, portability... for every platform (Mac, MS, BSD, Linux, ...).
6. Assembler (for one arch), C or C++ should be seriously considered before starting a project.
7. ./configure ; make ; make install was, is and will be forever. Make sure that your programme can be easily compiled by a human. A short Makefile is key.
8. A man page - desirable, but not strictly necessary.
9. Never mix programming languages (Python, Java, Perl, ...); it is ugly and inefficient. Avoid system() calls.
10. The golden one: keep your programme small. Several small programs are better than one big one.

Code should be created step by step, starting from a simple design, improved daily and simplified constantly. The progress of programming should be a process of natural simplification of the programme: the code should look smaller and more efficient as development goes on. It is better to use your own libraries written from scratch, to preserve portability; avoid bloated libraries that will stop working after a few years because of changing versions. Prefer basic X11 to bloated toolkits when possible. Keep the code as small as possible to avoid system bloat and mistakes! Keep dependencies to a minimum, since library versions change over time and won't stay compatible. A good programme shall still be easy to compile after 5-10 years! The last rule takes its example from the MS Windows OS: MS has one big programme running the whole show, which takes a vast amount of memory and makes the system slow. Believe me, the smaller your C programme is, the more efficient it will be. In any case, take a quick look at your processes (ps aux, top or htop, mem, ...) to see how the programme performs on CPU usage, memory, and so on. Compilation time is also key: a programme shall compile in a reasonable, realistic time. The use of pointers should be restricted to one level at most. In C, try to stick to a minimal, portable base by including only string.h, stdio.h and stdlib.h at first, and extend your included headers only if necessary. But really, stick to the minimum; it is really important. Maybe in 10-15 years you will be on another platform? You will then be happy about good portability. The shorter the code, the better; but your own libraries written from scratch are welcome, since they significantly extend the portability of your programme to other systems. "No function should be longer than what can be printed on a single sheet of paper (in a standard reference format with one line per statement and one line per declaration). Typically, this means no more than about 60 lines of code per function." (Src: NASA) All code must be checked regularly and should pass the analyses with zero warnings. |
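Rule 7's "short Makefile" might look something like this minimal sketch; the programme name prog, the source file and the install prefix are placeholders, not anything from the post (and note that recipe lines in a real Makefile must begin with a tab character):

```make
# Minimal, portable Makefile for a small C programme.
# 'prog', prog.c and PREFIX are illustrative placeholders.
CC      = cc
CFLAGS  = -Wall -Wextra -O2
PREFIX  = /usr/local

prog: prog.c
	$(CC) $(CFLAGS) -o prog prog.c

install: prog
	install -m 755 prog $(PREFIX)/bin/prog

clean:
	rm -f prog
```

With something this small, "make ; make install" works for any human with a C compiler, which is the point of the rule.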
Quote:
Another thing he told me was "Learn to forget confidential/personal data you see in files". I always thought that was a good rule, and I remember some people got in trouble with management when they could not forget and told people what they saw. |
Moderator response
Moved: This thread is more suitable in <Programming> and has been moved accordingly to help your thread/question get the exposure it deserves.
|
Writing your own libraries sounds like a reversion to the Microsoft model in which every program comes with its own packaged library .dll files. When I started with Linux and learned about shared libraries, I was really impressed. New code always tends to be buggy code, and having to write a new set of libraries for every program just to get around copyright rules sounded to me like a recipe for trouble. All the standard Linux libraries have been around for so long that they've had most of the bugs knocked out of them. A little bloat seems a small price to pay for so much convenience.
Also having to write everything to interface with X without using a widget library would completely exclude people like me who want to write the odd program to do some job for them and for fun. I would never have started with C if gtk2 hadn't existed. |
Don't write huge mega-genius functions which span dozens of screen pages. In a few years, you yourself won't be able to remember what's going on, because you will have moved on to other things. And if even you can't fix a bug (or extend your code with new functionality), then who else can?
Keep your code simple and easy to read. Spend some time thinking about how to make the code easily understandable, and you'll enjoy your own code for a much longer time. If you're adding specialties, don't shy away from writing a few code comments about what exactly is going on. Once your source code grows (to megabytes of source), a bloated, poorly documented mess will strongly reduce the fun factor when it comes to adding new features you hadn't thought of yet...

Name your source files, functions & variables in an understandable way, even if the names get a little bit longer. A source file named o13.c with a function named f781() won't help you understand the code very much.

Try to keep modularity in mind. Having everything in your project derive from everything else will create a huge, complex knot of code which gets difficult to overhaul or re-design, if need be. Try to build smaller modules with clearly defined interfaces, so an entire module can be re-written from scratch if needed. Such things happen because you're rarely able to see on day one what kind of features & functionality your software will include in, say, 5 or 10 years..

If you write a library, Doxygen is your friend.

C++ has matured immensely with C++11 and newer versions of the language. Just beautiful. Don't start with any C++ variant older than C++11.

Be careful with IDEs. Sometimes they come up with a huge update and all of a sudden your workflow changes, or something stops working the way you got used to. Make sure that you keep the option of using your own build environment, be it CMake or whatever you're using (often a simple Makefile will do for small projects), so you can stay independent of any particular IDE. I always did that and I never regretted it. An IDE is merely a text editor with fancy extras, a debugger and a handful of additional features. No matter how great an IDE might be, I never let it auto-takeover the build environment.
This also helps with distributing open source projects, since they don't require anyone to use a particular IDE to work on them..

At least with bigger projects, don't write custom hand-made install/uninstall scripts, as this drives system admins mad. They don't like reading through scripts that use tons of hand-crafted rm commands at the root level... let CMake or Autotools deal with that. They're far less likely to accidentally delete all the wrong files.. |
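The modularity advice above might be sketched in C like this: the module exposes just two declarations, and everything else is static, so the entire implementation can be rewritten from scratch without touching any caller. The counter_* names are invented for illustration:

```c
/* --- counter.h: the clearly defined interface callers include --- */
void counter_reset(void);
int  counter_next(void);

/* --- counter.c: everything below is private to the module; because the
 * state and helpers are 'static', this whole file can be rewritten
 * without breaking any caller --- */
static int value;  /* hidden module state, implicitly zero */

void counter_reset(void) { value = 0; }

int counter_next(void) { return ++value; }
```

A caller only ever sees the two prototypes; nothing outside this file can name `value`, which is exactly what makes the module replaceable.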
Member response
Hi,
Any code project should be well documented, with proper comments to show what is being done. That way readers (sometimes yourself, returning after a long period away) will be able to see the flow of the project. Be sure your comments are specific and relevant to the stanza(s), so you will know what is going on at that point. You do not need to comment each line, only where specifics are needed, so you know what is happening.
Have fun & enjoy! :hattip: |
Quote:
I hope that this answer may guide and help you. |
According to that article, gcc & clang both suck, too (they even call GCC a "virus"). Now that's a new perspective to me. Given those kinds of standards, all computer software probably sucks.. :O
About CMake: the fact that it takes some time to build may be a tad unfortunate, but that's not a reason to reject it entirely. The only thing is that a user also needs to install CMake to build the software, whereas autotools comes kind of self-contained, requiring only a Bourne shell and a make. I never really liked that little fact about CMake, but I couldn't find the perfect alternative build environment yet, either. Have you? Anyway, CMake still beats writing custom shell code snippets for installing/uninstalling one's own software on millions of other users' machines.. that's what I was trying to point out anyway. Let's leave it at that in order not to bloat this forum thread with off-topic talk.. |
Quote:
Anyhow, if you compile against a high version of GTK+ with g++, your whole system will be affected so much that whatever else you do will be negligible. You will need a lot of money to buy good hardware to compile graphical applications. It won't be that bad, depending on the given needs. Fabrice Bellard has made a possible alternative to GCC (TinyCC). |
Get the money up front!
|
More than 50% of all open source software (not just Linux stuff) violates one or more of those rules. Here's my personal summary opinion of it all...
Some complexity is necessary. Those who argue otherwise are naive at best, horribly thick at worst. Autotools is a great example: given the diversity of platforms, users and library configurations, something has to deal with all those #ifdefs, and doing the number of them I see in packages by hand is more likely to lead to errors than an attempt to automate it. Do you really want to do aclocal by hand? What about intltoolize? How many spoken languages are you familiar enough with to avoid telling someone that --verbose will put poop in their toaster instead of leading to more detailed output? If you go this route, #5 is going to eat up years of your life.

Generalizations are bad, including "all complexity is bad", which is exactly what I think they're saying over on suckless.org. If that's the case, you're all violating your own rules by using Harvard machines. Get yourself a 6502 or 68000, Von Neumann 4-eva! What complexity is bad? Needless complexity. And we certainly have plenty of that. Keep in mind some of this is tongue-in-cheek. If you get butthurt, I won't care.

Programmer: Whine! Autotools is too hard! <stamps foot> I know, let's make a new build system! We'll be heroes!

Maintainer: Whine! Configure/make is too slow! Sure, it runs all automated on a remote server, but still, I like to watch and it's sooo slow. I know, let's make a new build system! We'll be heroes!

And so we have: waf, cmake, qmake, ninja, scons, something that began with "y"? Yams, yak, yodel? It's largely dead now. What all this amounts to is pollution that complicates an already complicated ecosystem even further. This problem is even worse when it comes to documentation. Half the stuff I build doesn't have man pages because groff and texinfo simply won't do. (Looks at freedesktop sullenly.) Please stop, you aren't helping.

New Programmer: Whine! C/C++/Go is too hard, why can't we use Java/node.js/C# to write low-level system code? I know! We'll make a language that converts into C, and call it Vala! It converts into C, all right. It does such a good job of writing garbage C that Googling for "no vala code accepted" actually produces some funny stuff. I'm surprised it's not made it into GnuTLS yet. Wait, "introspection" probably has to get there first. Oh, the horror.

I could go on and on. I'd like to tell you how many packages require other packages that really should not, but I'll whittle it down to two. Why exactly is a bookmark library required to build a window manager (kwin5)? Why exactly is a weather applet required to build a desktop panel (mate)? I suspect people just don't know how to do optional dependencies anymore. The bottom line is that The Bizarre often gets too chaotic, and we're simply not taking the time to prune properly. |
Lurdis I don't often agree with what you say but in this case you hit the nail right on the head!
+Lots |
I don't agree with those ten rules. Actually, these rules depend highly on the target and on the size/functionality. In some cases these rules can be useful, but in general there are no general 10 rules.
|
I also disagree with those ten rules. In any valid rule set, #1 should be "never start coding before your design framework is complete". In many cases you should not code until the entire design is complete, but my experience is that you do not always have that option. Coding before the requirement is well defined is likely to waste time and effort, and more subtly may result in code that works but is not optimal for the final design (perhaps resulting in support consequences down the road).
Portability is nice if you are coding an application that must port. If you are coding a driver for a device to run only on the Raspberry Pi 3, for hardware of your own design and only with a specific Raspberry Pi OS, of what value is portability? "To a man with only a hammer, the world looks like a box of nails." Pick the right tool for the job. If you are a versatile coder you have many tools in your kit. Code in the tool that is optimal for solving the problem (and maintaining the result), not just whatever is in "style" today. (This depends upon your ability to CHOOSE your tool. I have seen managers REQUIRE a language because it was featured in the last tech magazine they read. Some managers should not be allowed magazines!) |
Quote:
And by the way, when this cabal nightmare is over and our ET friends finally show up, they're gonna show us their operating systems & software, and everything we ever developed here on Earth will be rendered obsolete overnight anyway! Imagine, all that trouble for nothing.. Just sayin'.. |
Quote:
For instance, I don't know what the story is behind it. But I know that two forks of ffmpeg are not a good thing from an engineering standpoint. It may be that it was required to smooth over some bruised egos somewhere, but it seems like it would have been a better decision to put personal feelings aside rather than split up work on something that large. Quote:
It doesn't seem to occur to anyone to reduce the scope, finish whatever it is, and make the extension a new project. It also doesn't seem to occur to anyone that sometimes throwing out code is a good thing. If your garage is a mess, you start by taking everything out. After that, tossing out the rubbish is usually your first step... |
Quote:
No, this is not "an abnormal situation." Software is extremely valuable and extremely expensive. (You(!) are expensive!) The business risk associated with it is basically "off the chart, all the time," and these are the conditions under which you work. If you can't stand the heat, get out of the boiler room. (But, more or less, these are the conditions all "engineers" face, even when the thing being engineered is completely intangible, like software.) |
Quote:
If a program isn't compilable after 5-10 years, that would imply that it wasn't maintained during that time. And if no one has updated a program for 5-10 years, then, well, were the program and the programmer(s) responsible for it actually "good"? |
Quote:
Do you think it might be because NASA actually printed code out on paper as a matter of standard procedure? (Hint: yes) |
Quote:
I also feel that there are no "general" rules. A project should, ideally, complete requirements gathering and design before coding begins...but as has been said, that's not always possible. I believe that each company/shop should have rules for consistency within the organization, and those rules should be strictly followed, but the rules might be different than those in another company or shop. |
Quote:
I'm aware that Xeratul's list is intended to guard against future changes to the technical requirements (such as changes to the target platform's library dependencies or display server). Not saying that his list is the best way to do that, of course. ;) |
Dependencies should be only exactly what is needed to achieve the most basic goal of the program. Features that require extra things should always be made optional, especially if they're frivolous features.
Requiring me to build libsoup and a ton of other stuff to support a weather widget that isn't even turned on by default in the Mate-Desktop panel is a perfect example of dependency madness. That thing should be in with the rest of the applet packages. Tying it to the panel reminds me of Packard Bell computers in the 90's. Once I ran into one of these early single board Packard Hell monsters where the modem would not work unless an FDD was connected to the MFM cable from the motherboard. How this is electrically possible in a sane engineering environment is not known to me. But, we (me and the client) verified it in test after test. No floppy ribbon, no dialup process. I was in the Twilight Zone of computer hardware. |
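The "make extras optional" rule can be sketched with a compile-time switch in C. WITH_WEATHER and panel_features are hypothetical names invented here, not anything from the real MATE code; the idea is simply that the core always builds, and the frivolous extra (with all its dependencies) is compiled in only on request, e.g. `cc -DWITH_WEATHER panel.c`:

```c
#include <string.h>

/* Hypothetical sketch: the core panel always builds; the optional
 * weather feature is only compiled in when explicitly requested,
 * so nobody is forced to install its dependencies. */
const char *panel_features(void)
{
#ifdef WITH_WEATHER
    return "panel+weather";   /* only if the optional dep was wanted */
#else
    return "panel";           /* the most basic goal of the program */
#endif
}
```

Autotools and CMake both have idioms for turning such switches into `--enable-weather`-style build options, which is how the dependency stays out of the default build.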
Well, "one page" is highly subjective.

Density: 600x600 dpi (even cheap printers can do this; faxes are 300 dpi and have existed for decades)
Size: 8.5" x 11" -> (8.5 * 600) x (11 * 600) = 5100 x 6600 dots
Font: 8x15 -> (5100 / 8) x (6600 / 15) = 637 characters x 440 lines (or 825 lines of text if the font is only 8 px tall)

Also bear in mind that some printers can do 1200x1200 dpi without breaking a sweat; even with 3x3 dots per pixel at 1200 dpi for readability, you're still at 530 x 687 characters. Make multiple 100+ character columns and you easily get 1k lines of code on a single page, even if you double the font height to make it more like traditional fonts. The 5x11 px neep font will fit the entire hier man page on a single 1080p screen when using Xdmx + Xephyr to divide that screen up and form one really tall xterm.

-----

I wouldn't mind a world where all non-C languages compiled/converted their code base to C before making an executable. There are various styles of programming; it's not that any one is "more correct". I tend to favor the fewest lines of code over readability, but even that won't fit on "one page" if you're doing anything complex. I'm not a fan of one program per process; I tend more towards one program per system that you pass parameters to execute the "part" of the process needed at the time. Imagine having to call ffmpeg_${codec} versus passing a parameter, and distros creating separate packages per codec, none of them installed by default. It would probably be less buggy and better maintained, but what an administrative nightmare. |
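The per-page arithmetic above can be written down as a pair of helper functions; the names and the integer-truncation behaviour are my own illustrative choices, not from the post:

```c
/* Back-of-the-envelope page maths: how many fixed-width characters fit
 * on one printed page at a given density. Results truncate, since a
 * partial character column or row is unusable. */
int chars_per_row(int dpi, double width_in, int font_w)
{
    return (int)(dpi * width_in) / font_w;   /* dots across / dots per char */
}

int rows_per_page(int dpi, double height_in, int font_h)
{
    return (int)(dpi * height_in) / font_h;  /* dots down / dots per line */
}
```

Plugging in the post's numbers (600 dpi, 8.5" x 11" paper, an 8x15 font) reproduces its 637 x 440 figure.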
Quote:
https://vimeo.com/221534566 |
I was told at university that the most important rule of programming is to make programs self-documenting. That means informative variable and function names, plenty of comments, proper use of nesting, and clear separation of high-level and low-level code. Functions that deal with concepts should have separate housekeeping functions to deal with the nitty-gritty of implementing those concepts.
If you have to debug a program a year or two after you wrote it, it's like reading a program written by someone else. If it isn't well documented, you can no longer understand how it works. |
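The "concepts vs. housekeeping" split described above might look like this in C; line_count and count_char are names invented here for illustration:

```c
/* Housekeeping function: the nitty-gritty of walking a string and
 * counting occurrences of one character. */
static int count_char(const char *text, char c)
{
    int n = 0;
    for (; *text != '\0'; text++)
        if (*text == c)
            n++;
    return n;
}

/* Concept-level function: reads as the question it answers,
 * "how many lines does this buffer hold?" */
int line_count(const char *buffer)
{
    return count_char(buffer, '\n');
}
```

The top-level function stays self-documenting because the low-level loop lives in its own clearly named helper.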
Member response
Hi,
I wholly agree with documentation within the code that follows good coding practices. I stated earlier in the thread that the program content should be well commented. I have seen code where the author commented so sparsely that you could not understand what was going on without stepping through the code yourself. The opposite is where the author commented just about every line of code; overboard, for sure. When I code, my comments for a section or stanza will have an intro comment giving a general description of the intended operations, then comments within to scope that area.

I use Slackware GNU/Linux and AlienBob's scripts/tools to help admin my systems. Eric's scripts are some of the best I have ever used. He clearly documents the scripts, and you can easily read them for understanding, or modify them to suit, since you are clearly shown the states or conditions within.

I have written a few programs throughout my life, and whenever I go back to patch, re-purpose or even check the code, I am able to see what is going on by reading my comments. Even in my personal toolbox, the code snippets are commented throughout so I understand the purpose of the code. Why re-think something when a comment will help you understand what is going on?

Hope this helps.

Have fun & enjoy! :hattip: |
Quote:
"I was told at university that the most important rule of programming is to make programs self-documenting." you got the point!! If you have a clean code, you do not need a manual or doc. It is optional. A clean code does not mean to have comments. :scratch: |
Quote:
int mlcr;        <-- Bad
int mainLoopCtr; <-- Good

WTF is mlcr? Machine-language carriage return? Mother-lovin' character code? We have friggin' auto-completion now, even in stuff like Emacs and Vim. There's no longer any reason to be cryptically brief because you're trying to save yourself from carpal tunnel. |
I try to avoid mixed case and plurals when programming. And what the hell is ctr? And how is that "good"? I would assume counter, but I could just as easily assume counter-terrorism resource, or counter technology regulation, or Candice T. Rice. At least we've progressed past the point of dropping all vowels to keep names <= 8 characters. At least in some of the languages and platforms.
|
Quote:
Mixing cases makes it easier for your eyes to discern word separations, and I hate typing underscores. What is important is that the brain has a way to break up words. If you don't think that's necessary, I suggest you download a plain-text book, put it in a text editor, remove all spaces and punctuation, and then tell us how much fun it was to read. I also prefer Allman braces, and there's real science in that preference: the science of the brain's love of symmetry, and of left-right eye movement plus refocusing's contribution to eye fatigue and headaches. Why Allman hasn't forced out all others on principles of body science alone is beyond me. Excessive eye movement leads to fatigue; when searching for braces, up and down movements are always necessary, while left and right eye movements are only necessary should one prefer a brace style based upon inefficient design. :p |
On the subject of casing variables: I use cased names for globals and all lower case for local variables; it makes it easy to see what the scope is at a glance.
|
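That convention might look like this in C; all names here are invented for illustration:

```c
/* Sketch of the poster's convention: Cased names for globals,
 * lower case for locals, so scope is visible at a glance. */
static int TotalHandled = 0;        /* global: cased */

int handle_batch(int batchsize)
{
    int doubled = batchsize * 2;    /* local: lower case */
    TotalHandled += doubled;
    return TotalHandled;
}
```

Anywhere `TotalHandled` appears, the casing alone tells you it outlives the function call.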
Several guidelines for the pragmatic programmer:
(1) Don't try to re-write it. Just don't. I know that it stinks. Find a way to make the modification that best suits the particular trouble-ticket or change-order that you have been assigned. Don't go beyond the scope of that.

(2) Make your code look, and work, like what's there. Follow their de facto standards for variable naming, indentation, and so on. Use the objects that they used. I know that it stinks. Don't make it more evident than it already is that fifty people have been working on it for thirty years.

(3) Test the holy hell out of it before you commit your branch, and don't merge it right away into the trunk. Your modification should include automatically-runnable(!) tests which will verify that the requirements of the ticket or change-order have been exactly met. Someone else will be responsible for desk-checking your work and merging it into the trunk.

(4) Don't write anything that has not been ordered. If the change-order or trouble-ticket does not expressly authorize it, don't do it. I know that it stinks. But these are the changes (the only changes) that have been signed off on from above. The rest of the code-base either works now, or contains defects that are probably known. (If you discover another defect, write it up.) Your change has been approved because it is understood to be limited and focused. So, it must be. |
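Point (3)'s automatically-runnable tests can be sketched in plain C with nothing but assert; the clamp helper here is a made-up stand-in for whatever the ticket actually changed:

```c
#include <assert.h>

/* The change under test: a hypothetical helper added by the ticket. */
static int clamp(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* Automatically-runnable check: aborts (nonzero exit) on any failure,
 * so a build server can run it before the branch is merged. */
void test_clamp(void)
{
    assert(clamp(5, 0, 10) == 5);    /* in range: unchanged */
    assert(clamp(-1, 0, 10) == 0);   /* below range: clamped to lo */
    assert(clamp(99, 0, 10) == 10);  /* above range: clamped to hi */
}
```

Because the check is a function with a nonzero-exit failure mode rather than a manual procedure, "someone else desk-checks and merges" becomes possible without re-explaining the ticket.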
Quote:
I wonder if it could maybe be made into a somewhat smaller subversion package. |
My list is much shorter. In descending order of importance, every good programmer should be:
1. Arrogant 2. Stubborn 3. Slightly crazy 4. Intelligent |
Quote:
That's not very helpful, and rather off-topic :( |
Quote:
Even if you're working exclusively as a one-person team. |
Quote:
I don't think that systemd is that bad at all, since it only concerns Unix-like distributions. To soften the comment: there is actually some good in systemd. Together with PulseAudio, it allows a nice system ideally suited to Ubuntu. Ubuntu isn't that bad, and systemd is a nice add-on; it is perfect for Ubuntu's desktop target. PulseAudio, Avahi, systemd and more to come are a good solution for a Unix-like OS such as Ubuntu. Ubuntu users usually do not care much whether it is PulseAudio, systemd, or something else; they are focused on desktop use. Whatever one thinks of the programming behind them, PulseAudio and systemd are widely used and look stable enough for this type of distribution and for full user satisfaction. Ubuntu has become a widely installed and used distribution; nothing to say, Ubuntu is established as a recognized Linux distribution. (DE: https://www.golem.de/news/lennart-po...10-109649.html) |
:idea: Can we leave systemd out of one discussion please?
Other than that... rules+=1: Don't mix code. That is not to say, you can't use more than one language, just make sure they're in separate files of the appropriate suffix. |
Quote:
- - - - - As for personal qualities, I would really prefer, "reliable, disciplined, and easy to work with." Don't need ego, and you're not as intelligent as you think you are, or if you are then nobody really cares because they know of course that they are much smarter than you. :) The one thing that I say to any team that I work with is: "Don't surprise me. :tisk: I don't like surprises." |
I also worked in C, with embedded SQL. C is not really portable; Python/Perl/Java (for example) are much better for writing platform-independent code (although you can do that with C too).
|
Quote:
(Well do I remember the days years decades ... koff koff ... when they were not!) |
Quote:
As an aside, there is an effect that is humbling when you work with hundreds of people that you KNOW are brilliant, highly trained, far more intelligent than you are, but who have NO idea what is going on under the hood when it comes to the technology. You, you are the expert, but you will never be the expert that nurse is. Even the med tech saves more lives and has more training. You learn to be careful when you speak to people, and value what they bring to the table. From there I went to working on a team, where I could actually share the load (AND the blame) and count on other eyes to help catch anything I might miss. OH MAN I learned to love teamwork! Having someone else to QA your design, advise on things you might not know about the effects or value of a project, QA your final product, or stage testing or deployment FOR you is priceless! Alas, we do not always have that luxury. |
Quote:
A few years ago my preference was BASH (very quick to complete, very portable, several others could maintain it, and I could hand a solution off to clients and they might even know what to do with it if something went wrong later), PERL (requires few or no external calls, PERL library hell is achievable but also avoidable, very portable, very fast, nicely maintainable, somewhat less readable), or C/C++ (compiled, mixed but very good results on portability, compiled static has fewer dependencies than even PERL, execution-time gold, less maintainable unless you have a team of coders, a clear preference if you are coding a solution that will not need enhancement, maintenance, or modification for a LONG time). These days I might add MONO, Python, FreePascal (because I have found it oddly useful for things that nothing else does, it has built-in protections missing from C/C++, it compiles to a somewhat static executable, and I like the style), RUST, FORTH (for embedded), FORTRAN (for matrix work), or COBOL (Object Oriented COBOL, and because nothing does government forms better). It really depends on the project, the data, the maintenance and support profile, and the time restrictions. Ultimately, you have to let the problem dictate the solution, and the solution recommend the tools. |