Programming: This forum is for all programming questions.
The question does not have to be directly related to Linux, and any language is fair game.
Is there a 100% guaranteed, standardized, super duper way to determine if a system is big endian or not at compile time? Any preprocessor macros that are defined on all big-endian systems and on no little-endian systems, or anything like that? Every time I check I feel like it's a hack, with things like:
Well, obviously I made that up. But I can't figure out if there's a standard, compiler-independent way to check endianness at compile time. I always come up with stuff that works but it always feels so dirty and unreliable.
Yeah, but I want to avoid runtime checking just so I can avoid branching in inner loops that may or may not need to perform byteswapping. I have found, though, that using code like this actually works quite well with optimizations turned on:
if (0x00FF & *(unsigned short *)"a") {
    /* do little endian stuff */
} else {
    /* do big endian stuff */
}
The condition is constant, so the compiler optimizes the branch away. But I don't particularly like that method.
Thanks, primo. That link was helpful. I was pretty sure that generating headers at compile time would be the best way to go, but I was hoping there would be a cleaner way. It's too bad there are no standard defines or anything like that; it seems like something that would have been useful. Oh well.
I could, but see post #3. I want to avoid repeated runtime checks. Also, AFAIK, there's no 8-byte hton* function, so binary doubles are still an issue. The other problem with those functions is that network byte order is big-endian, but I usually work on little-endian machines, so I might as well keep everything little-endian for the sake of efficiency.