[SOLVED] C preprocessor define for 32 vs 64 bit long int
I want to compile some C code on two machines: one is 32-bit Linux, the other 64-bit FreeBSD.
How can I determine, via a preprocessor define, which architecture it is compiling on? Specifically, I want to know how a long int is defined without using sizeof(long) or other run-time logic.
On the 32-bit Linux machine, I can look at the definition of __BITS_PER_LONG or __WORDSIZE, both of which are defined as 32. I can't find either of these definitions on the 64-bit FreeBSD server.
In my case I can use #ifdef __linux__, but that is hardly the proper method. I think there must be some universally available preprocessor define to determine 32 vs 64 bits, but what is it?
You could check 'ULONG_MAX' from 'limits.h'.
It can be 4294967295 or 18446744073709551615 (or something else, as it is platform-dependent).
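For what it's worth, the limits.h constants are required by the standard to be usable in #if directives, so this test can be done entirely in the preprocessor. A minimal sketch; the LONG_IS_64BIT name is just for illustration:
Code:
#include <limits.h>

#if ULONG_MAX > 0xFFFFFFFFUL
  /* unsigned long is wider than 32 bits on this target */
  #define LONG_IS_64BIT 1
#else
  #define LONG_IS_64BIT 0
#endif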
That sounds reasonable, but on the 64-bit FreeBSD machine 'ULONG_MAX' is still defined as 4294967295 ('LONG_MAX' is defined as 1 [??]). But sizeof(long) returns 4 on the 32-bit machine, and 8 on the 64-bit server.
Quote:
Originally Posted by dogpatch
without using sizeof(long) or other run-time logic.
As far as I know, "sizeof" is compile-time; it is handled NOT by the pre-processor, but by the C compiler itself. Note that the default size of a long is - in gcc - controllable by a compile-time option:
Code:
-m32
-m64
-mx32
Generate code for a 32-bit or 64-bit environment. The -m32 option sets "int",
"long", and pointer types to 32 bits, and generates code that runs on any
i386 system.
The -m64 option sets "int" to 32 bits and "long" and pointer types to 64 bits,
and generates code for the x86-64 architecture. For Darwin only the -m64
option also turns off the -fno-pic and -mdynamic-no-pic options.
The -mx32 option sets "int", "long", and pointer types to 32 bits, and
generates code for the x86-64 architecture.
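To illustrate that sizeof really is resolved at compile time (even though it cannot appear in a #if), a build can be made to fail outright when long has the wrong width. A sketch, assuming a C11 compiler for _Static_assert; the typedef line is the older pre-C11 trick:
Code:
/* C11: compilation fails unless long is 8 bytes */
_Static_assert(sizeof(long) == 8, "expected a 64-bit long");

/* Pre-C11 equivalent: the array size becomes -1 (illegal) if the test fails */
typedef char long_is_64bit_check[(sizeof(long) == 8) ? 1 : -1];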
Quote:
As far as I know, "sizeof" is compile-time; it is handled NOT by the pre-processor, but by the C compiler itself. Note that the default size of a long is - in gcc - controllable by a compile-time option:
Code:
-m32
-m64
-mx32
You are correct, of course: sizeof() is determined at compile time. Pardon my inaccurate use of terms.
As for the -m64 option, it is not available on my 32-bit machine, and, anyway, what I want is for the same C code to produce run-time binaries for either machine as their native 32- or 64-bit architectures allow, while making some logic determinations based upon which architecture it is compiled on.
My (temporary?) workaround follows, as a separate post.
I've found that limits.h on the Linux machine looks at whether '__x86_64__' is defined to determine the definition of '__WORDSIZE', and that '__x86_64__' is indeed defined on the 64-bit server. So I am doing this:
printf("size of int = %d\n", sizeof(int));
printf("size of long = %d\n", sizeof(long));
printf("long_max=%d\nulong_max=%u\n", LONG_MAX, ULONG_MAX);
printf("bits_per_long=%d\n", __BITS_PER_LONG);
which produces this output on the 32-bit Linux machine:
Code:
size of int = 4
size of long = 4
long_max=2147483647
ulong_max=4294967295
bits_per_long=32
and this output on the 64-bit FreeBSD server:
Code:
size of int = 4
size of long = 8
long_max=1
ulong_max=4294967295
bits_per_long=64
You can see that both 'LONG_MAX' and 'ULONG_MAX' appear to be unreliable, and it SEEMS better to rely upon '__x86_64__'. But I will not yet mark this thread as solved, unless someone in LQ land can either confirm this, or point me to a more universal and reliable method.
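One caution, offered as a sketch rather than gospel: '__x86_64__' only identifies the CPU family, so a 64-bit ARM or PowerPC box would slip through. GCC since 4.3 and Clang predefine '__SIZEOF_LONG__' on every target, and LP64 platforms additionally define '__LP64__'; neither is tied to x86:
Code:
#if defined(__SIZEOF_LONG__) && __SIZEOF_LONG__ == 8
  /* long is 64 bits */
#elif defined(__LP64__)
  /* LP64 ABI: long and pointers are 64 bits */
#else
  /* assume long is 32 bits */
#endif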
If you are using the Visual C++ compiler (or the Intel C++ compiler on Windows), you can use the '_M_X64' macro to know the architecture.
The type 'long' is always 4 bytes there, on both 64-bit and 32-bit architectures (https://docs.microsoft.com/en-us/cpp...p?view=vs-2019).
You can use the type 'long long' for a 64-bit integer.
With Clang, GCC and Intel (on Linux) you can use '__x86_64' to know if you are on a 64-bit architecture.
You can use the macro '__LONG_MAX__' to know the size of the type long.
The Intel C/C++ compiler is peculiar, because it behaves like Visual C++ on Windows and like GNU C on Linux.
The macros to detect each compiler are:
Code:
_MSC_VER           Microsoft Visual C++
__clang__          LLVM C compiler
__INTEL_COMPILER   Intel C/C++ compiler
__GNUC__           GNU C compiler
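Combined, a dispatch on those macros might look like this (a sketch; note that Clang and Intel also define __GNUC__ in GCC-compatible mode, so the more specific checks must come first):
Code:
#if defined(__INTEL_COMPILER)
  /* Intel C/C++: MSVC-like on Windows, GCC-like on Linux */
#elif defined(__clang__)
  /* LLVM Clang */
#elif defined(_MSC_VER)
  /* Microsoft Visual C++: long is 4 bytes even on 64-bit Windows */
#elif defined(__GNUC__)
  /* GNU C */
#endif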
Thank you for affirming my use of '__x86_64'. Can I be sure that this will be defined on all GCC implementations on 64-bit machines? (I should have specified that I am only interested in GCC.)
'__LONG_MAX__' is defined as -1 on the 64-bit FreeBSD server, still apparently as meaningless as 'LONG_MAX'.
Is there a way to mark a thread as 'completely confused'?
Upon starting to implement the above, I find that on the 64-bit FreeBSD server, the ULONG_MAX definition as 4294967295 is indeed correct even though sizeof(long) is 8. When I perform a calculation that exceeds the above max, the long integer becomes negative, or truncated. If a long integer is 8 bytes (64 bits), but its effective max is only 32 bits, what are the other 32 bits used for?
Is this a bug in GCC as implemented on the 64-bit server?
printf("size of int = %d\n", sizeof(int));
printf("size of long = %d\n", sizeof(long));
printf("long_max=%d\nulong_max=%u\n", LONG_MAX, ULONG_MAX);
printf("bits_per_long=%d\n", __BITS_PER_LONG);
Quote:
Originally Posted by dogpatch
Is there a way to mark a thread as 'completely confused'?
Upon starting to implement the above, I find that on the 64-bit FreeBSD server, the ULONG_MAX definition as 4294967295 is indeed correct even though sizeof(long) is 8. When I perform a calculation that exceeds the above max, the long integer becomes negative, or truncated. If a long integer is 8 bytes (64 bits), but its effective max is only 32 bits, what are the other 32 bits used for?
Is this a bug in GCC as implemented on the 64-bit server?
If you are talking about the above code, then the problem is that %d takes an int, so if you pass an 8-byte long, it gets truncated. If you compile with -Wall you should get a warning about it.
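For reference, a corrected version of the snippet might look like this (a sketch: %zu for size_t needs C99, and the Linux-only __BITS_PER_LONG is guarded with #ifdef):
Code:
#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* sizeof yields a size_t, printed with %zu */
    printf("size of int = %zu\n", sizeof(int));
    printf("size of long = %zu\n", sizeof(long));
    /* LONG_MAX is a long (%ld); ULONG_MAX is an unsigned long (%lu) */
    printf("long_max=%ld\nulong_max=%lu\n", LONG_MAX, ULONG_MAX);
#ifdef __BITS_PER_LONG
    printf("bits_per_long=%d\n", __BITS_PER_LONG);
#endif
    return 0;
}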
Quote:
If you are talking about the above code, then the problem is that %d takes an int, so if you pass an 8-byte long, it gets truncated. If you compile with -Wall you should get a warning about it.
What a noob am I! Thank you.
Likewise thanks to NevemTeve for his original reply to my question.