[SOLVED] A general question about compiler warnings
People who start programming are usually told to take compiler warnings seriously, or even to compile with the -Werror flag, which treats warnings as errors. That's because they often are errors. For example, if the types in an assignment don't match, the chances are quite high that you have accidentally used the wrong variable name. The result will compile, but it won't do what you intended.
But when you build packages from source, there are always loads of warnings and you learn to ignore them. Why don't the people who write the code follow their own guidelines?
Well you're assuming that they're the same people, which isn't necessarily the case.
But even when they are the same, there are various potential reasons.
For example, some people get stuck at places where code needs to be "working" and there "isn't time" to make it well-written; the developer might always intend to go back later and fix it, but doesn't get the chance. If they later find themselves helping a new developer, it's natural to advise against repeating their mistakes.
Or perhaps the code was simply written against a previous version of the language/compiler which didn't feature those warnings, and because nobody has raised issues about the warnings the developer assumes all is fine.
That's a good question Hazel. C allows the programmer so much leeway that warnings are invaluable in helping find issues. Real problems can hide in lists of warnings.
I keep my warnings turned up and clear them. My CFLAGS typically look like this:
Personally I put in as many compiler switches as I can to catch warnings, and I also treat them as errors so that the build fails.
And I would prefer to maintain this practice in work code.
Boughtonp has several good points about the authors, long term maintenance, inheriting code bases, etc.
I will also add that the same applies if you purchase code from a company, such as a software stack or an algorithm: there can be warnings, and then you're inheriting them. Many vendors do strive to ensure that they have none, and are also responsive if you report warnings or other bugs. To summarize: if the vendor is not responsive and does not take you seriously, that tells you something about how professional they are and how serious they are about selling software source.
In my workplace, we have several tools, astyle and Klocwork are some of them, which we use to check our code as we perform continuous integration.
I can say that for the several consumer products and their various software builds I work with, we have no warnings; they are not allowed.
Meanwhile, nobody outside of our organization would know this, because they are not building our code, only we are doing that.
I can't speak for Linux package compilations, except to agree with you that there are warnings when there shouldn't be.
Here's my current list, but I haven't revised it in a few years:
Code:
-O0 // Disables optimization; not a permanent recommendation, use selectively
-ggdb // Compiles with debug info for GDB
-Wformat=2
-Werror
-Wall
-Wextra
-Wswitch-default
-Wswitch-enum
-Wstrict-prototypes
-Wmissing-prototypes
-Wmissing-declarations
-Wmissing-noreturn
I was talking about all those packages that go to make up Linux. I've never built any commercial software, but I've been building and using LFS/BLFS for years. Before that, I used Crux, which is source-based. There are always loads of warnings in those packages. Why is that?
You can write almost anything in C, and most of it will compile and run. Warnings mean some kind of "illogical" programming or a contravention of some "natural" rule.
Usually these warnings mean incorrect logic or improper handling of variables/structs/whatever, but the code is still allowed, and can still be compiled and executed.
The compiler's checks try to find all of these problems. Sometimes the programmer knows the reported issue is a false positive and ignores it. Sometimes we know the program works well, and nobody will modify anything just to eliminate those warnings. Sometimes we inherit a piece of code from somewhere and don't want to alter it at all.
Oh yes, and additionally, newer compilers will give more warnings....
It's like drawing anime or cartoons, only not like drawing anime or cartoons, at all.
You do live drawings, realistic, studies of the anatomy, bones, fat, etc.
And once you know those rules, you bend them into style.
But that's comparing apples and oranges, since programming is logical and art is subjective; still, both deal with breaking rules.
It can be deliberate.
The C64 for example thrived from exploits and tricks not immediately intended by the engineers. I mean, one version of the sound chip had a flaw where every time a filter was changed, a click was produced.
This click was exploited to generate waveforms/pcm playback despite the chip not having that.
(Note: the details of my retelling might be sketchy, but it's a fact that one version of the SID was exploited for that effect.)
So even something revolutionary might come from flipping the compiler (or hardware) the bird, not just "oh the software had to be shipped or we'd have lost our heads" or other such things.
Jaywalking is like that too: entire road free of traffic? It's a bad move, and you'd be warned not to do it because it's frowned upon, or it might even carry a fine, but... there's no danger right now.
Oh yeah, I think "the fastest possible delegates in C++" from some time ago were also created by not adhering to strict standards, and probably many, many other things were too.
I would guess that the cases where above-and-beyond expertise explains a large number of warnings are rare. Legacy code excepted, I would say that for the few times breaking the rules is required, an "expert hacker" would also know a pragma that can quiet the warning. In any event, I don't expect the large number of warnings found in some code bases to be a result of that.
I'm grateful for warnings when the compiler reminds me that this:
Code:
size_t s_len = strlen (str);
...
for (size_t i = s_len; i >= 0; --i)
is going to spin its wheels forever, because an unsigned i can never drop below zero. It's a trivial example, but endless warnings hide real errors like this.
Meaning that people like me must program correctly but longtime hackers can break the rules if they like because they know how to get away with it.
The only formal rules are the syntax of the language, and they are not as draconian as you're suggesting: there are true errors that stop compilation, and there are warnings.
I'd prefer that it not be treated as "people like me" concept. I can't control what anyone else thinks, but I do not feel it should be classified as anything like novices, experts, intermediates, and etc. We're all programmers of some type, talent, and experience, however it has been plainly shown that there are plenty of programmers with extensive experience who are ignoring warnings and perpetuating that versus doing something about it.
I know you were talking about Linux source distributions and packages. I absolutely cannot speak for the authors and maintainers of those.
My experiences, as cited, have been commercial. While things have gotten better, there clearly were, and continue to be, times when the whole code base is h$%# in a handbasket: there seem to be elusive bugs in a variety of locations, one team is accusing another, and so on. Some voice of sanity says, "Well, we're going to fix this!", and they're high enough in authority that they win. As part of that, the fact that there are xxxx warnings becomes an issue, and there's a global effort to reduce or eliminate them and to see that more are not introduced. Meanwhile, it's always been about delivery. There are always one or two warnings that came from source we bought, and it doesn't seem right to fix those ourselves, because when we update the vendor's code they'll just come back; so we try to speak to the vendor of the software, or things like that.
It's not a perfect world.
Think of other situations in the world where things are allowed to be sloppy. Hopefully the result is not a catastrophic failure, like the O-rings on the US Space Shuttle, but obviously that is an example of a very large integration effort which failed somewhere and allowed a critically dangerous situation to exist. Once all the circumstances lined up to allow that failure to occur, it did so at the worst possible time. But when people are delivering new commercial product software, priorities can be conflicted.