Big Endian vs. Little Endian
I guess programming is the best place to put this...
Does anyone have a hard-set view on which architecture is better? I know this was an issue when Macs changed from PowerPC (running in big-endian mode) to Intel processors (which are little-endian). For those who don't know, it's a somewhat arcane programming thing: in a multi-byte variable, the byte stored at the variable's address is either the most significant byte (big endian) or the least significant byte (little endian). So if you have a 64-bit variable, a big-endian machine stores the most significant byte of the 8-byte value at the lowest address, while a little-endian machine stores the least significant byte there. I.e.: Code:
0x1111 2222 3333 4444
I was just wondering if anyone had more informed opinions. |
Quote:
It has been a while since I had to think about the wild endians!
Back in the '70s (anyone remember them?) I did a lot of real-time motion-control development on the 6800 and 68000 (both big-endian) and the 6502 (actually little-endian). That was almost entirely done in assembly, which really wasn't bad even on the 68000s. In the late '80s I wrote a complete operating system for the 8051 (some instances of which are still in use!), also entirely in assembly language. The 8051 was mixed big/little-endian. I forget the nitty-gritty, but as I recall 16-bit addresses had to be presented big-endian while everything pushed onto the stack was little-endian. But still, I don't recall any frequent confusion from the programmer's point of view. I did some early x86 in assembly, but everything in surviving memory has been in higher-level languages, and I mostly don't think much about it. As to which one is "better"? Given the basic tools applicable to a given architecture, no strong reason for one over the other from the software point of view comes to mind. Even when keeping it all straight "up there" from a 68000 in the morning to a Z-80 in the afternoon, I don't recall any real problem. From a hardware designer's perspective... that was too long ago and I don't remember... ;) |
From a moderator point of view, if you have a specific question about endianness, programming is the right place.
Otherwise, depending on how this thread progresses in the next few posts, we may move over to General. |
for me it was only an issue with 16-bit imaging data
it was a mess for a while: 16-bit signed data in both MSB-first and LSB-first order, plus 16-bit unsigned data in both MSB-first and LSB-first order |
Sorry, not quite on topic, but I prefer my dates consistently endian -- that is, I prefer today be rendered as 2017/06/10 or 10/06/2017.
As to which is best in hardware, it appears neither is better. |
I still find little-endian arcane beyond belief. Especially the byte reversal. But big-endian no longer pays the bills, so who cares ... :shrug:
|
Big endian has a few advantages. They're easiest to see when working with numbers bigger than the CPU can handle natively. For example, if you want to sort an array of such numbers, you can use qsort and memcmp to compare them. And if you want to output them as hex, you can start at the beginning and simply output the bytes.
And we all write numbers big-endian. With zero/space padding they're easier to sort. I wish we would do it with dates as well, but I think the problem is that nobody would say the year first when you need to tell someone a date. |
Quote:
Most of my experience has been with little-endian, since that's what Intel processors are. So naturally, that's what I view as natural and easiest to work with. I sometimes encounter a pointer to a variable of unknown size. Is a "long" 2, 4, or 8 bytes? Depends on the processor. How about "size_t"? Usually it's the same as "long", but not always. On a little-endian machine, it doesn't matter much: just cast the pointer to the shortest size it might be, and unless the value is enormous it'll work just fine. But on a big-endian machine you have to cast it to exactly the right size or it won't work at all. So I think little-endian gives you a little grace, which is an advantage over big-endian. |
Quote:
When I started doing assembly language programming back in the 1980s, most platforms were based around either the Zilog Z80 or the MOS 6502. I've worked with both, and although I absolutely loved the Z80, there's something to be said for the 6502's simplicity. |
Quote:
I agree, the 6502 was a great device to work with! Simple but capable, allowed for rapid prototyping of many ideas, which made it seem like magic at times. |
Quote:
Apple's programming guidelines are to keep your code endian-neutral (even if you have to use macros), since they reserve the right to change processors at any time. But Intel seems to be the only game in town now. They had to abandon the PowerPC since it had been largely taken over by IBM, which only cared about mainframe processors with unlimited power budgets, while Intel was coming up with new low-power architectures that were still fast and suitable for laptops. |
Having started with Motorola and its associated partners, I have to say big endian. Also, having worked with communications and Internet protocols, I prefer Network Byte Order (NBO), which is also big endian. Prior to the more universal adoption of host-to-network and network-to-host functions, we always had to have home-built helper functions or macros, and everyone had their own proponents of names and conventions, so there was some general anarchy back in the day.
Good times! |
A bit off topic, but this got me thinking about the Itanium and its future. I guess it is bi-endian, but Intel has announced the latest version will be its last, so it's become a dead (or almost dead) architecture.
|