Programming
This forum is for all programming questions. The question does not have to be directly related to Linux, and any language is fair game.
Hi. I'm writing a small C++ program. I put in two global constants, like so:
Code:
// variable names have been changed to protect the guilty
const unsigned long int HIGHEST_VAL = 0 - 1;
const unsigned long int SECOND_HIGHEST_VAL = 0 - 2;
The idea is that HIGHEST_VAL is always equal to the largest unsigned long int value, and that SECOND_HIGHEST_VAL... you get the picture. But will these lines have the desired effect when compiled on all systems, with any size of long int? It works this way on two x86 and amd64 systems, but I'm a little suspicious because, I would assume, "0 - 1" is evaluated before it is assigned, so maybe some system might put, for example, the max value of a 32-bit number into a 64-bit variable.
[Edit: I always compile with g++, though I suppose other compilers would be relevant to the discussion.]
Keep in mind that your code will require the system to perform a pow() on each use of HIGHEST_VAL. Fortunately, the multiplication is likely to be optimized out. In any case, the portable/standard way to do this is via limits.h. Is there any situation where your macro is the optimal case? (I'm also still not sure what the typecast to char is for. In fact, I would think it would break things when a char overflows...)
Valid point, but if you're lacking limits.h, you can always go for the left-shift operator.
Also: pow only returns a double, powl (which is non-standard) would be needed for longs. (And you need to subtract 1 from your values to get the maximum that can be stored.)
It wouldn't overflow unless the var is bigger than 128 bits.
I once encountered a non-standard compiler that had no limits.h, but it did have a pow function.
Seeing as this is C++, did it not have <limits> (no ".h")? Also, long long is non-standard in C++. Then there is the fact that the 8 is meant to stand in for CHAR_BIT, yet a byte is not defined to be that size.
If I had to go a platform-specific way and make some assumptions, this would personally be my choice:
Code:
const unsigned long ULONG_MAX_I_DO_NOT_HAVE_LIMITS(~0ul);
Matir, your code will just set the highest bit to one and then negate one, which is not what is wanted.
Actually it will not do that; I wonder if anyone can see what it will do?