Programming: This forum is for all programming questions.
The question does not have to be directly related to Linux, and any language is fair game.
I figured that might be the case, but the point still stands that a while loop doesn't fit the flow of the program, which makes it easier to make mistakes, as you did when summarizing. Basically, repeating code is bad, and the while loop forces repeating the input statement. Looking at the rest of the code, it's clear that get-valid-input is also being repeated several times and should go into a function, like johnsfine said.
I am pleased, because you all seem to be saying what I was thinking.
@Sergei: Thanks for your acoustic example. I had a pseudo-code example (above) of where I thought a do loop was necessary - I am pleased that you have provided a real-life example.
It wasn't my code, and I recommended that the other guy abandon it and start again with some clear thinking. I thought it was important to do a design by commenting the methodology first. That's what we did, except with a basic calculator that had a basic design like this:
Code:
//Get a float
//Get an operator
//Get a float
//Calculate the answer
The GetFloat function has the error checking in it (like johnsfine said) - it uses scanf for simplicity. We are nearly there with clear, concise code (funny, we have no do loops). The OP's code for the calculator was similar to the example code above - quite frankly a nightmare considering the simple nature of the problem, and he had 100 lines of code.
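A sketch of what a GetFloat along these lines might look like - this is my own guess at the idea, not the OP's actual code, and all the names are invented. The parsing core is split out so the error checking can be seen (and tested) separately from the prompting loop:

```cpp
#include <cstdio>
#include <cstdlib>

// Returns true only if the text contains exactly one float and
// nothing else (trailing whitespace is tolerated).
bool TryParseFloat(const char *text, float *out)
{
    char extra;
    // sscanf returns 1 when one float converts and no stray character follows
    return std::sscanf(text, " %f %c", out, &extra) == 1;
}

// Keep prompting until the user types a valid float - the error
// checking lives here, so the caller stays clean.
float GetFloat(const char *prompt)
{
    char line[128];
    float value;
    for (;;) {
        std::printf("%s", prompt);
        if (!std::fgets(line, sizeof line, stdin))
            std::exit(1);              // EOF: give up
        if (TryParseFloat(line, &value))
            return value;              // valid input: done
    }
}
```

Note there is no do loop here either: the `for (;;)` with an early return keeps the input statement written only once.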
The problem with these novice coders is they tear into writing code, often with little or no idea of what they are doing. I suppose everyone has to start somewhere, but I have seen some shockers. Calling main (inside main) to restart the program was a beauty!!
You guys could have a field day with all these novices, but the problem is that one ends up giving out the same advice over & over.
Finally, there have been no comments about my restricted Hungarian notation - do I assume that no news is good news? Wait - I shouldn't assume anything.
there have been no comments about my restricted Hungarian notation
I hate all forms of Hungarian notation. For me they interfere with seeing the meaning of the names when I look at chunks of code. Obviously, I can look at a name and mentally cut off the type info in order to read the rest. But even when working with a lot of Hungarian notation code, I can't make myself see it that way when I read code the way I would read an English sentence (which does not require consciously reading individual words).
On this topic, I'm just not as sure as on typical programming style topics that what I see as wrong is really wrong, rather than just wrong for me.
This seems to fall into that category of question where one camp says "Whoaaaa, there! Too unconventional! It causes people to create bugs!", and the other camp who expects competent readers to understand the constructs supported by the language, and to know the circumstances under which any particular construct makes sense. I stand squarely in the latter camp. I don't (usually) feel bad about using programming language features that are unconventional or obscure, and I don't light my hair on fire when I see things written in ways I don't immediately grasp.
OTOH, I do try to write code to suit my audience, if I know who that is. I would not, for example, knowingly use some obscure language element in a tutorial code sample in these forums. If I were writing code that was expected to be efficient in some way, and knew that the other programmers who might read it would be skilled in the language, I would feel quite at ease using whatever language construct the situation called for. Sometimes, bending the code around to satisfy some other programmer's sensibilities seems counterproductive and awkward.
If I encounter code that is written in some way that seems unconventional, I start with the notion that it was written that way for some good reason. Sometimes that turns out to be true, and other times I think it does not. Many times, it forces me to think about the problem in a different way than I might naturally. Many times, it teaches me a new technique.
In other words, I don't think there is just one correct answer.
Hungarian notation is by and large used completely wrong. It's not supposed to indicate what's already said by the type. The original point of the notation is to indicate information that isn't conveyed by *just* the type. For example, a double could actually be a length, an area, or a volume, and it's sometimes useful to indicate that tersely as a prefix so that it's easy to visually do some basic dimensional analysis.
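A small illustration of that original ("apps") use of the notation - the variable names here are invented for the example. All three variables are plain doubles; it's the prefixes (len = length, area = area) that carry the meaning the type alone cannot:

```cpp
// "Apps Hungarian" in the original sense: the prefix encodes the
// quantity's dimension, not its C type.
double lenWidth  = 3.0;   // a length, in metres
double lenHeight = 2.0;   // a length, in metres

// len * len = area: the prefixes on both sides line up, so the
// dimensional analysis can be done visually.
double areaWall = lenWidth * lenHeight;

// A mistake like  areaWall = lenWidth + lenHeight;  would now look
// wrong at a glance, even though the compiler would happily accept it.
```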
Quote:
I'm wondering what you're getting at with the #define vs const thing here.
I'm not sure either, since #define is a *terrible* way to express constants, because #define doesn't have any kind of scoping, and the compiler doesn't care either way.
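The scoping point can be shown in a couple of lines - names here are made up for illustration. A macro defined in some far-away header ignores namespaces, classes, and blocks entirely, because the preprocessor rewrites the text before the compiler ever sees it:

```cpp
#define SIZE 7            // typically lurking in some header, far away

namespace parser {
    // int SIZE = 32;     // would NOT compile: the preprocessor turns
    //                    // this line into  int 7 = 32;
    enum { Size = 32 };   // a scoped alternative: parser::Size
}
```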
For instance, if you decide you only want single float precision, now you have to go and change the name to fScaleFactorArray everywhere you use it.
Maybe I am spoilt, I use KDevelop, so Find & Replace isn't a problem for me - shows how selfish I am.
Quote:
I'm wondering what you're getting at with the #define vs const thing here.
I was thinking that something like DAYSINWEEK is always 7, so why not a #define? Wasn't that one of the basic uses of a #define?
Code:
#define DAYSINWEEK 7
Quote:
I read "us" as short for United States, which is a bit confusing.
Fair enough, maybe I should change it. Ha that could start a whole new debate ....
I guess that everyone has their own style, and if coding is your work, then you will probably have to conform to your organisation's coding standard. The main thing is to be consistent.
I'm wondering what you're getting at with the #define vs const thing here.
I'm wondering who you were quoting that expressed that wrong opinion regarding #define
C and C++ don't have good support for compile time constants. I think that is a flaw in the language, but that is what we are stuck with. The keyword const does not make a variable into a compile time constant. It is still a variable.
To define compile time constant integers, I always use an enum. That is a bit counter intuitive (until you are used to it). But it does everything you want a compile time constant integer to do.
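The enum trick described above might look like this (class and constant names are my own, for illustration). Unlike a const variable in C, an enumerator is a true compile-time constant, and unlike a #define it obeys normal scoping rules:

```cpp
// Compile-time integer constants via enum: scoped, typed by context,
// and usable anywhere a constant expression is required.
class Week {
public:
    enum { DaysInWeek = 7 };          // class-scoped: Week::DaysInWeek
};

enum { BufferSize = 256 };            // file-scope constant, no macro needed

// Legal because DaysInWeek is a genuine constant expression:
int hoursPerDay[Week::DaysInWeek];
```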
For other compile time constants, I have no good answer.
#define is a very bad answer because it has no reasonable scope limits. Once you work on any really big project, you will understand how important such scope limits are.
I have used kludges, such as an inline function (defined in an .hpp file) that returns the value of a function-static variable (or, for simpler types, just the value). That means you need constant_name() instead of constant_name every place you use it. Also, you are making extra work for the compiler in a normal build, and a distracting non-inlined function in a debug build. Also, the optimizer sometimes fails to sort it all out, so you end up with worse code than a simple variable would give. But I don't know a better way, especially for the common case where the constant ought to be a member of a class and defined in the class .hpp file:
You wish a class definition could include something like
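A sketch of the inline-function kludge described above (Circle and Pi are invented names for illustration). The wish is for an in-class initializer like `static const double Pi = 3.14159;`, which classic C++ only allowed for integral types; the kludge hides the value behind an inline accessor instead:

```cpp
// Kludge for a non-integer class constant in a header file.
class Circle {
public:
    // Callers must write Circle::Pi(), not Circle::Pi.
    static double Pi()
    {
        return 3.14159265358979;       // simple type: just return the value
    }

    static double Area(double r) { return Pi() * r * r; }
};
```

(Later C++ standards added `constexpr`, which removes the need for this, but the kludge matches what the compilers under discussion here supported.)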
We were replying at the same time, so I didn't see this bit.
Quote:
I'm not sure either, since #define is a *terrible* way to express constants, because #define doesn't have any kind of scoping, and the compiler doesn't care either way.
Another thing I have learnt today then. Press Save !!
I was going on what was in K&R's "The C Programming Language". I can see that this is not good in C (scoping), or in C++, because of scoping, inheritance, etc.
...
For other compile time constants, I have no good answer.
#define is a very bad answer because it has no reasonable scope limits. Once you work on any really big project, you will understand how important such scope limits are.
...
When I was an integrator of VLSI projects, we first defined naming conventions, something like this conceptually:
That's a (not-that-great) solution to a problem that wouldn't have existed in the first place if you used something with decent scoping.
If you use a static const variable, and the compiler can determine you never ever try to take its address (so there's no pointer shenanigans to try to ninja-modify the value), the compiler is legally allowed to treat it as a true constant (and almost all compilers do, when optimizations are turned on).
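A minimal sketch of that situation (names invented here). The constant is scoped and typed like any variable, but since its address is never taken, an optimizing compiler may fold the value straight into the generated code:

```cpp
// Scoped, typed constant - no macro, no enum trick needed.
static const double ScaleFactor = 2.5;

double Scale(double x)
{
    // With optimizations on, the compiler can substitute 2.5 directly
    // here, because ScaleFactor's address is never taken anywhere.
    return x * ScaleFactor;
}
```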
Maybe I am spoilt, I use KDevelop, so Find & Replace isn't a problem for me - shows how selfish I am.
Sure, Find & Replace is easy, but it's still an extra manual step that can be forgotten or screwed up. Plus, if you are using version control (and you are, of course), then your diff is now needlessly bigger and maybe harder to merge with other people's changes.
But again, what is the upside to Hungarian notation?
Quote:
I was thinking that something like DAYSINWEEK is always 7, so why not a #define - was that one of the basic uses of a #define ?
But you said using const for dEarthRadiusConst is better? What's the difference?
Quote:
Originally Posted by johnsfine
The keyword const does not make a variable into a compile time constant. It is still a variable.
Wouldn't a static const variable be inlinable by the compiler (I see tuxdev thinks so)? I haven't worked with floating point much, so the enum trick has always been enough for me.