Does anyone know how to tell gcc (4.2) to use 64-bit integers instead of 32-bit? I have a program which gets an integer overflow (no, it's not bad code...) on an IA64 architecture and need to correct it.
Code:
/* size.c */
#include <stdio.h>
#include <limits.h>
int main(void)
{
    printf("Size of int is %zu bits\n", 8 * sizeof(int));  /* sizeof yields size_t, hence %zu */
    printf("INT_MAX (signed) is %d\n", INT_MAX);
    return 0;
}
To my surprise it does not make any difference for vanilla ints.
On my 32-bit computer (running Ubuntu "Gutsy"):
Code:
bash$ gcc --version
gcc (GCC) 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)
bash$ file size
size: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.6.8, dynamically linked (uses shared libs), not stripped
bash$ ./size
Size of int is 32 bits
INT_MAX (signed) is 2147483647
On my AMD64 PC (running Debian "Lenny"):
Code:
bash$ gcc --version
gcc (GCC) 4.2.3 20080114 (prerelease) (Debian 4.2.2-7)
bash$ file size
size: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), for GNU/Linux 2.6.8, dynamically linked (uses shared libs), not stripped
bash$ ./size
Size of int is 32 bits
INT_MAX (signed) is 2147483647
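For comparison, here is a small variation of the test program (a sketch added for illustration, not part of the original post) that also reports the wider types. On an LP64 system such as 64-bit Linux one would expect long, long long and int64_t to be 64 bits while int stays at 32:
Code:
/* widths.c -- hypothetical companion to size.c above */
#include <stdio.h>
#include <limits.h>
#include <stdint.h>

int main(void)
{
    /* On an LP64 build, int remains 32 bits wide... */
    printf("int       : %zu bits\n", 8 * sizeof(int));
    /* ...but long, long long and int64_t are 64 bits wide. */
    printf("long      : %zu bits\n", 8 * sizeof(long));
    printf("long long : %zu bits\n", 8 * sizeof(long long));
    printf("int64_t   : %zu bits\n", 8 * sizeof(int64_t));
    printf("LONG_MAX  : %ld\n", LONG_MAX);
    return 0;
}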
Quote:
Originally Posted by Hko
To my surprise it does not make any difference for vanilla ints.
This is so as not to disrupt the huge amount of source code that assumes an int is 32 bits long.
The LP64 model, which keeps int as a 32-bit entity, was agreed upon by most Unix vendors around 1995.
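In other words, on common LP64 targets there is no gcc option that widens plain int; the usual fix is to move the overflowing calculation to an explicitly 64-bit type such as int64_t or long long. A minimal sketch (the variable names are made up for illustration):
Code:
/* overflow_fix.c -- sketch only, not the original poster's code */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    int a = 2000000000;
    int b = 2000000000;

    /* a + b would overflow 32-bit int arithmetic; casting one operand
       makes the addition itself happen in 64 bits. */
    int64_t sum = (int64_t)a + b;

    printf("sum = %" PRId64 "\n", sum);  /* PRId64 from <inttypes.h> prints an int64_t portably */
    return 0;
}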