
LinuxQuestions.org (/questions/)
-   Programming (https://www.linuxquestions.org/questions/programming-9/)
-   -   Big Endian vs. Little Endian (https://www.linuxquestions.org/questions/programming-9/big-endian-vs-little-endian-4175607639/)

Laserbeak 06-09-2017 05:48 PM

Big Endian vs. Little Endian
 
I guess programming is the best place to put this...

Does anyone have a hard-set view on which architecture is better?

I know this was an issue when Macs changed from PowerPC (running in big endian mode) to Intel processors (that are little endian).

For those who don't know, it's a somewhat arcane low-level detail: in a multi-byte variable, the lowest memory address (where a pointer to the variable points) holds either the most significant byte (big endian) or the least significant byte (little endian).

So if you have a 64-bit variable, a big endian machine stores the most significant byte of the 8-byte value at the address the pointer gives you, while a little endian machine stores the least significant byte there. I.e.:

Code:

0x1111 2222 3333 4444
  ^              ^
  |              |--- Little endian pointer points here in memory
  |
  |----------Big endian pointer points here in memory

I think big endian seems more "right", but I also see the advantages of little endian (like a pointer to a value remaining valid when you reinterpret it at a smaller size).

I was just wondering if anyone had more informed opinions.

hydrurga 06-09-2017 06:02 PM

Don't forget middle endian! ;)

https://en.wikipedia.org/wiki/Endianness#Middle-endian

Laserbeak 06-09-2017 06:21 PM

Quote:

Originally Posted by hydrurga (Post 5721107)
Don't forget middle endian! ;)

To quote Mr. Comey: "Oh Lordy!"... Just when I thought I had it figured out! Hehe...

astrogeek 06-09-2017 06:58 PM

It has been a while since I had to think about the wild endians!

Back in the '70s (anyone remember them?) I did a lot of real-time motion-control development on 6800/68000, and 6502 chips which were all big-endian. That was almost entirely done in assembly, which really wasn't bad even on the 68000s.

Late 80's I wrote a complete operating system for the 8051s (some instances of which are still in use!), also entirely in assembly language. The 8051s were mixed big/little-endian. I forget the nitty-gritty, but as I recall 16-bit addresses had to be presented as big-endian while everything pushed onto the stack was little-endian. But still, I don't recall any frequent confusion from the programmer point of view.

I did some early x86 in assembly, but everything in surviving memory has been in higher level languages and I mostly don't think much about it.

As to which one is "better"? Given the basic tools applicable to a given architecture, no strong reason for one over the other from the software point of view comes to mind. Even when keeping it all straight "up there" from a 68000 in the morning to a Z-80 in the afternoon, I don't recall any real problem. From a hardware designer perspective... that was too long ago and I don't remember... ;)

astrogeek 06-09-2017 07:03 PM

From a moderator point of view, if you have a specific question about endianness, programming is the right place.

Otherwise, depending on how this thread progresses in the next few posts, we may move over to General.

John VV 06-10-2017 01:01 AM

for me it was only an issue with 16-bit imaging data
it was a mess for a bit

16-bit signed, MSB- and LSB-first
and
16-bit unsigned, MSB- and LSB-first

273 06-10-2017 08:03 AM

Sorry, not quite on topic, but I prefer my dates consistently endian -- that is, I prefer today to be rendered as 2017/06/10 or 10/06/2017.
As to which is best in hardware, it appears neither is better.

syg00 06-10-2017 08:25 AM

I still find little-endian arcane beyond belief. Especially the byte reversal. But big-endian no longer pays the bills, so who cares ... :shrug:

Guttorm 06-10-2017 09:19 AM

Big endian has a few advantages. They're easiest to see when working with numbers bigger than the CPU can handle natively. For example, if you want to sort an array of such numbers, you can use qsort with memcmp to compare them. And if you want to output them as hex, you can start at the first byte and simply print the bytes in order.

And we all write numbers big endian. With zero/space padding they're easier to sort. I wish we would do it with dates as well, but I think the problem is that nobody would say the year first when telling someone a date.

KenJackson 06-10-2017 01:09 PM

Quote:

Originally Posted by Laserbeak (Post 5721104)
Does anyone have a hard-set view on which architecture is better?
...
I was just wondering if anyone had more informed opinions.

Ah! You're asking for opinions on a religious question. :)

Most of my experience has been with little-endian, since that's what Intel processors are. So naturally, that's what I view as natural and easiest to work with.

I sometimes encounter a pointer to a variable of unknown size. Is a "long" 2, 4 or 8 bytes? Depends on the processor. How about "size_t"? Usually it's the same as "long", but not always. On a little-endian machine, it doesn't matter. Just cast the pointer to the shortest size it might be. Unless the value is enormous it'll work just fine.

But on a big-endian machine you have to cast it to exactly the right size or it won't work at all. So little-endian gives you a little grace, which I count as an advantage over big-endian.

Ser Olmy 06-12-2017 05:14 AM

Quote:

Originally Posted by astrogeek (Post 5721121)
and 6502 chips which were all big-endian.

The 6502 is most definitely little-endian.

When I started doing assembly language programming back in the 1980s, most platforms were based around either the Zilog Z80 or the MOS 6502. I've worked with both, and although I absolutely loved the Z80, there's something to be said for the 6502's simplicity.

astrogeek 06-12-2017 11:57 AM

Quote:

Originally Posted by Ser Olmy (Post 5721868)
The 6502 is most definitely little-endian.

When I started doing assembly language programming back in the 1980s, most platforms were based around either the Zilog Z80 or the MOS 6502. I've worked with both, and although I absolutely loved the Z80, there's something to be said for the 6502's simplicity.

Did I say that?! Solar flare resulting in bit flip...

I agree, the 6502 was a great device to work with! Simple but capable, allowed for rapid prototyping of many ideas, which made it seem like magic at times.

Laserbeak 06-12-2017 12:45 PM

Quote:

Originally Posted by Ser Olmy (Post 5721868)
The 6502 is most definitely little-endian.

That's interesting, I kind of assumed it was big-endian since it was the Apple ][ processor. But I never learned assembly at all until the Macs took over from the Apple ][s, and the MC68000 was definitely big-endian. So Apple went from little-endian to big-endian to bi-endian (but run in big-endian mode) with the PowerPC, then back to little-endian with Intel.

Apple's programming guidelines are to keep your code endian-neutral (even if you have to use macros), since they reserve the right to change processors at any time. But Intel seems to be the only game in town now. They had to abandon the PowerPC since it had been largely taken over by IBM, which only cared about big-iron processors with unlimited power budgets, while Intel was coming up with new low-power architectures that were still fast and suitable for laptops.

rtmistler 06-12-2017 12:48 PM

Having started with Motorola and its partners, I have to say big endian. Also, having worked with communications and Internet protocols, I prefer Network Byte Order (NBO), which is also big endian. Before the host-to-network and network-to-host functions were universally adopted, we always had to build our own helper functions or macros, everyone had their own proponents of names and conventions, and so there was some general anarchy back in the day.

Good times!

Laserbeak 06-12-2017 01:03 PM

A bit off topic, but this got me thinking about the Itanium and its future. I guess it is bi-endian, but Intel has announced the latest version will be its last, so it's become a dead (or almost dead) architecture.

