2

From what I've read (other questions on the site, etc.), the vast majority of desktop systems have little-endian architectures, and Windows doesn't even support big-endian. Now I'm wondering whether it's even worth the extra effort to deal with endianness for the small number of big-endian desktop systems out there (if there even are any). The application in question is 64-bit only and portable (Windows and Linux), in case that's relevant.

A practical benefit of going exclusively little-endian would be saving on htonl/ntohl conversions for network communication, allowing raw binary data to be sent directly (from the application to other instances of itself on a remote machine). The performance difference would be negligible, but reducing code complexity is quite attractive.
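To illustrate, roughly what I have in mind for the little-endian-only path (just a sketch; `Message` and its fields are placeholders, and it assumes both ends are little-endian machines compiling the struct with identical layout and packing):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Placeholder wire message; the real layout would be fixed and documented.
struct Message {
    std::uint64_t sequence;
    std::uint32_t payload_size;
    std::uint32_t flags;
};

// Little-endian-only approach: copy the raw bytes, no htonl/ntohl anywhere.
std::vector<unsigned char> serialize(const Message& m) {
    std::vector<unsigned char> buf(sizeof m);
    std::memcpy(buf.data(), &m, sizeof m);
    return buf;
}

Message deserialize(const unsigned char* data) {
    Message m{};
    std::memcpy(&m, data, sizeof m);
    return m;
}
```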

Is there a compelling reason to support big-endian on desktops? Are big-endian systems even being used for desktops these days?

Wingblade
  • 207
  • 2
  • 6
  • see [On discussions and why they don't make good questions](https://softwareengineering.meta.stackexchange.com/q/6742/31260) – gnat Jul 31 '18 at 15:59
  • Endianness is a function of the CPU architecture, not the OS. Some architectures like SPARC V9 or ARM have bi-endian support, but those are irrelevant on the desktop and still support little endian. Well, that depends on your definition of desktop (where do ARM devices like Chromebooks, iPad Pros, and Raspberry Pis fit in?). But aside from that, all desktop devices use the little-endian AMD64 architecture (which you likely mean by “64 bit”). – amon Jul 31 '18 at 16:32
  • @amon Desktop would include laptop-style personal computers, so Chromebooks count but iPads/Pis don't. Is AMD64 (aka x86_64) the only 64-bit architecture used in desktops? If so, my 64-bit-only limitation would also imply little-endian. – Wingblade Jul 31 '18 at 16:56
  • Related, if not a duplicate: https://softwareengineering.stackexchange.com/q/316566/20756 – Blrfl Jul 31 '18 at 17:48
  • Rumors about ARM-based Apple laptops are not far-fetched. – mouviciel Aug 01 '18 at 10:17

3 Answers

4

Note that 'network byte order' is big endian, so if you are transmitting any standardized structures, you will need to do that conversion.
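For example, a fixed-size header would typically get converted field by field before sending (a sketch; the struct and field names here are made up):

```cpp
#include <arpa/inet.h>  // htonl/htons/ntohl/ntohs (POSIX; winsock2.h provides them on Windows)
#include <cstdint>

// Hypothetical wire header, just for illustration.
struct WireHeader {
    std::uint32_t length;
    std::uint16_t type;
    std::uint16_t version;
};

// Host -> network byte order (big endian) before sending.
WireHeader to_network(WireHeader h) {
    h.length  = htonl(h.length);
    h.type    = htons(h.type);
    h.version = htons(h.version);
    return h;
}

// Network -> host byte order after receiving.
WireHeader to_host(WireHeader h) {
    h.length  = ntohl(h.length);
    h.type    = ntohs(h.type);
    h.version = ntohs(h.version);
    return h;
}
```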

Generally, most people avoid this issue by transmitting data in a textual form (like XML or JSON).

There are no major, widely used chipsets using big endian, so it may not be a practical problem for you. But things have a way of changing, and code has a way of getting lifted from one place and used in another.

I'd write the code the conventional way, supporting network byte order, as this will be the least surprising thing to do. As you say, the performance costs are really minimal. And consider using a textual format like JSON. That makes this entire issue go away, and has other benefits as well: traffic dumps are easier for people to read, and it's easier to leverage other tools that expect data in JSON or XML format.
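As a sketch of the textual route (hand-formatted here just to keep it dependency-free; a real codebase would more likely use a JSON library), the same hypothetical header serialized as text, where byte order never enters the picture:

```cpp
#include <cstdint>
#include <sstream>
#include <string>

// Serialize the header fields as a JSON object; no endianness concerns at all.
std::string to_json(std::uint32_t length, std::uint16_t type, std::uint16_t version) {
    std::ostringstream out;
    out << "{\"length\":" << length
        << ",\"type\":" << type
        << ",\"version\":" << version << "}";
    return out.str();
}
```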

Lewis Pringle
  • 2,935
  • 1
  • 9
  • 15
  • 2
    Textual formats aren't always a viable option, e.g. if a lot of data is being sent (state synchronization for games, audio streaming for VoIP, etc.). Still a fair point, though. – Wingblade Jul 31 '18 at 16:59
  • 3
    Textual formats impose HUGE amounts of overhead. – John R. Strohm Jul 31 '18 at 17:20
  • When you say 'overhead', are you referring to space or time? If space, then I disagree. If time, then certainly some, but how much it matters depends on what else you are doing. – Lewis Pringle Jul 31 '18 at 17:22
  • 1
    @LewisPringle Can you elaborate on "if space, then I disagree"? I'm not aware of any text formats that don't incur (usually large) space overhead. – Wingblade Aug 01 '18 at 09:50
  • @Wingblade Inline compression with protocols like HTTP works beautifully. It compresses the data down to - typically - LESS than the size of binary protocols (alone), and shows up in viewers (again typically - depends on viewers) as uncompressed data so easy to debug. This costs a little performance (for compression side, less on decompress side). But saves a little on system performance because of copying less data (net net a loss of CPU usage probably but can reduce latencies due to less data transmission). – Lewis Pringle Aug 01 '18 at 14:05
2

Don't just assume things in your code. You never know how long it will be in use.

I personally know of products in use since the early 1990s, moving from 68000-based MacOS via PA-RISC HP-UX, then x86 Linux, to currently x86 Windows. There were quite a few changes of architecture, endianness, filename syntax, etc. during that timespan.

So if your code deliberately doesn't support e.g. big-endian CPUs, write a unit test that fails if run on such a machine. Then in 10 years' time, when your future colleagues move your code to some new architecture, they get a clear indication of why it won't work out of the box.
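A minimal sketch of such a test (with C++20 you could make it a compile-time check via std::endian instead of this runtime check):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Fails on any non-little-endian machine, documenting the assumption
// instead of letting a future port produce silently wrong data.
void test_platform_is_little_endian() {
    const std::uint32_t value = 0x01020304u;
    unsigned char bytes[sizeof value];
    std::memcpy(bytes, &value, sizeof value);
    assert(bytes[0] == 0x04 && "this code base assumes a little-endian platform");
}
```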

Ralf Kleberhoff
  • 5,891
  • 15
  • 19
0

In the last four years I haven’t seen any code that depended on endianness, or could have been made simpler or faster by making assumptions about it.

gnasher729
  • 42,090
  • 4
  • 59
  • 119