61

Is there any good reason to supply a 32-bit version along with a 64-bit version of any software targeted at modern desktop machines, running modern 64-bit operating systems on 64-bit hardware?

It seems that 64-bit software would be more efficient, allow for higher memory usage if needed, etc. Apple even uses 64-bit processors for their phones, even though they only have 1-2 GB of RAM, way below the 4 GB limit of 32-bit CPUs.

Peter Mortensen
  • 1,050
  • 2
  • 12
  • 14
Filip Haglund
  • 2,823
  • 3
  • 16
  • 20
  • 18
    Not every modern machine runs 64 bit OS – Bálint Apr 15 '16 at 09:54
  • 4
    Do you have any examples? – Filip Haglund Apr 15 '16 at 09:57
  • Most Windows tablets run a 32-bit operating system. I'll link some examples later. Mine goes into this category too. – Bálint Apr 15 '16 at 09:59
  • 9
    Ask your customers. – Murphy Apr 15 '16 at 09:59
  • I personally see any machine with only 4 gigs of RAM as basically unusable for daily work, and both OS X and Windows 10 are 64-bit by default (you have to ask for a 32-bit version if you really want it), and hardware has been 64-bit for like 15 years now. – Filip Haglund Apr 15 '16 at 10:02
  • 24
    Rhetorical question: is there a reason to supply a 64 bit version of any software since most modern 64bit operating systems allow to run 32bit and 64bit applications as well? – Doc Brown Apr 15 '16 at 10:03
  • 1
    More registers, more memory, better performance in general? – Filip Haglund Apr 15 '16 at 10:04
  • 2
    Not a duplicate @gnat. That question is about fitting a timestamp, and a developer id in the error code returned when a program exits. – Filip Haglund Apr 15 '16 at 10:05
  • 2
    I can't vouch for elsewhere but well-established businesses in Britain tend to be very behind on technology. In the town where I live the busses have a screen to tell people what the next stop is, and sometimes it bluescreens and restarts, revealing it's Windows XP. One of my parents has worked in offices all their life and at their current place of employment the computers all run XP using a mixture of outdated software and new stuff crudely bolted on top. I wouldn't be surprised to learn that 32-bit machines are awkwardly common in business. – Pharap Apr 15 '16 at 12:27
  • This really raises the question: Why are 32-bit machines still in common use and commonly manufactured? – Panzercrisis Apr 15 '16 at 15:54
  • Some insight on this topic: [http://www.hanselman.com/blog/PennyPinchingInTheCloudYourWebAppDoesntNeed64bit.aspx](http://www.hanselman.com/blog/PennyPinchingInTheCloudYourWebAppDoesntNeed64bit.aspx) – neilsimp1 Apr 15 '16 at 16:12
  • 1
    @FilipHaglund lmao, your response to "not everyone has a 32-bit machine" was to ask for a citation, are you kidding me? "64-bit hardware has been available for years now" does not translate to "everyone has 64-bit hardware and OS". that's a really blinkered and almost elitist assumption – underscore_d Apr 15 '16 at 21:13
  • @underscore_d Sorry, I haven't seen a computer without 64bit hardware since 2010 except small raspberry pi's and similar. Of course there are still 32bit machines, I just couldn't think of any example of 32bit desktop hardware. No offense intended. – Filip Haglund Apr 15 '16 at 21:48
  • Apple hasn't accepted any 32-bit-only software for iOS for the last two years, so users of 64-bit iPhones have no choice, and developers of new software have no choice. – gnasher729 Apr 16 '16 at 11:08
  • @DocBrown: On MacOS X, if your computer actually runs 32 and 64 bit apps at the same time, then both 32 and 64 bit libraries need to be pulled in, so there's a huge advantage to running _only 32 bit_ or _only 64 bit_ software. Since nowadays _most_ software is 64 bit, one 32 bit app comes at significant cost. 10 years ago, running _one_ 64 bit app amongst all 32 bit apps was a significant cost. – gnasher729 Apr 16 '16 at 11:13
  • @underscore_d: Everyone who will ever consider paying for software that you write has a 64 bit computer :-) – gnasher729 Apr 16 '16 at 11:16
  • @rhughes: your link says quite the opposite of your comment. Typo? – Doc Brown Apr 16 '16 at 11:59
  • 1
    @gnasher729: my comment should only point out that the OP has asked his question in an IMHO very biased tone like "64 bits are clearly better than 32, so why should we still use this 32 bit crap?" - which is nonsense. There is no general "one is better than the other" in here. Luckily, he got good answers which pointed that out. – Doc Brown Apr 16 '16 at 12:06
  • 1
    Kind of related: An interesting discussion on why Visual Studio is not 64-bit: https://blogs.msdn.microsoft.com/ricom/2009/06/10/visual-studio-why-is-there-no-64-bit-version-yet/ – rhughes Apr 17 '16 at 00:40
  • Comments are not for extended discussion; this conversation has been [moved to chat](http://chat.stackexchange.com/rooms/38519/discussion-on-question-by-filip-haglund-is-there-a-good-reason-to-run-32-bit-sof). – yannis Apr 18 '16 at 09:15

4 Answers

83

Benefits of 32-bit software in 64-bit environments

  • Lower memory footprint, especially in pointer-heavy applications; going from 32-bit to 64-bit can easily double the memory requirements (see the sketch after this list).
  • Object files are smaller as well.
  • Compatibility with 32-bit environments.
  • Memory leaks are hard-capped at 2 GB, 3 GB, or 4 GB per process and won't swamp the entire system.
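
To make the pointer-size point concrete, here is a minimal sketch (the struct node layout is hypothetical); building the same file as 32-bit and 64-bit, e.g. with gcc -m32 vs. -m64 on x86, and comparing the printed sizes shows how a pointer-heavy structure roughly doubles:

```c
/* Minimal sketch: how pointer width inflates a pointer-heavy structure.
 * Build the same file twice (e.g. gcc -m32 / gcc -m64) and compare output. */
#include <stdio.h>

struct node {               /* hypothetical tree node: mostly pointers */
    struct node *left;
    struct node *right;
    struct node *parent;
    void        *payload;
    int          key;
};

int main(void) {
    /* ILP32: 4 pointers * 4 B + 4 B int           = 20 B
       LP64:  4 pointers * 8 B + 4 B int + padding = 40 B (roughly doubled) */
    printf("sizeof(void *)      = %zu\n", sizeof(void *));
    printf("sizeof(struct node) = %zu\n", sizeof(struct node));
    return 0;
}
```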

Drawbacks of 32-bit software in 64-bit environments

  • 2 GB, 3 GB, or 4 GB memory limit per process. (Only per process; in sum, multiple 32-bit processes may use the full available system memory.)
  • No access to the additional registers and instruction set extensions that come with x64. This is highly compiler- and CPU-specific.
  • May require 32-bit versions of all (on most Linux distributions) or of less common (on most Windows versions) libraries and runtime environments. If a 32-bit version of a shared library is loaded exclusively for your application, that counts towards your footprint. No difference at all if you are linking statically.

Other aspects

  • Drivers are usually not an issue. Only user-space libraries should differ between 32-bit and 64-bit, not the API of kernel modules.
  • Beware of different default widths for integer data types; additional testing is needed (see the sketch after this list).
  • The 64-bit CPU architecture may not even support 32-bit at all.
  • Certain techniques, such as ASLR and others that depend on an address space much larger than physical memory, won't work well (or at all) in 32-bit execution mode.
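
As an illustration of the integer-width caveat above, here is a minimal sketch: long is 8 bytes under the LP64 model (64-bit Linux/macOS) but stays 4 bytes under LLP64 (64-bit Windows), which is why fixed-width types from <stdint.h> are the usual defensive choice:

```c
/* Minimal sketch: default integer widths shift with the data model.
 * ILP32 (32-bit):         long is 4 bytes
 * LP64  (64-bit Linux):   long is 8 bytes
 * LLP64 (64-bit Windows): long stays 4 bytes */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    printf("sizeof(long)    = %zu\n", sizeof(long));    /* varies by data model */
    printf("sizeof(size_t)  = %zu\n", sizeof(size_t));  /* tracks pointer width */
    printf("sizeof(int64_t) = %zu\n", sizeof(int64_t)); /* always 8, portable */
    return 0;
}
```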

Without comparing a very specific CPU architecture, operating system, and library infrastructure, I can't go into more detail here.

Peter Mortensen
  • 1,050
  • 2
  • 12
  • 14
Ext3h
  • 1,364
  • 9
  • 12
  • 10
    _"The 64bit CPU architecture may not even support 32bit at all."_ Is this more of a theoretical concern, or does this exist in the world? – mucaho Apr 15 '16 at 15:54
  • 11
    @mucaho There certainly _have been_ 64-bit-only CPU architectures, such as the Alpha and the IA64. Both of those are moribund, though. I don't know off the top of my head whether there are any _currently produced_ 64-bit-only architectures - AArch64, maybe? Does anyone know whether 32-bit ARM is a mandatory component of that? – zwol Apr 15 '16 at 16:13
  • 11
    @zwol No, 32-bit is not mandatory for ARM, and neither is 64-bit. There are 64-bit only ARM CPUs, while others support both 32-bit and 64-bit processes. – Ext3h Apr 15 '16 at 16:16
  • 2
    For CPU-bound video games, in every case I've tested I've found that the x86 build performs better than the x86-64 build _(examples: Starcraft 2, various Unreal Engine games, various .Net XNA games)_. I think it's a combination of better cache-coherency, plus more mature compilers for x86. – BlueRaja - Danny Pflughoeft Apr 15 '16 at 16:24
  • 1
    Saying 2GB/3GB per process is incorrect if we are talking AMD64 compatible CPUs. The AMD64 architecture allows 32 bit applications to use the full 4GB virtual address space as long as the kernel is 64 bit. – kasperd Apr 15 '16 at 17:26
  • 1
    @BlueRaja-DannyPflughoeft: It's too bad that the scaled-segment concept used in the original 8086 was never extended to 32-bit segment registers, since even a very simplistic form with a fixed 16x scale factor would have allowed .NET to use 32-bit references to access 16-byte-aligned objects within a 256GB address space, and adding a little sophistication would have allowed memory to be partitioned into a small-object area with 16-byte alignment and a large-object area with 4096-byte alignment. – supercat Apr 15 '16 at 17:26
  • 3
    There is an additional benefit to simply picking one architecture and sticking to it: simpler development and testing. – jl6 Apr 15 '16 at 17:38
  • 2
    @BlueRaja-DannyPflughoeft on which platform did you test? On Linux (for me) 64-bit games perform usually better. – ljrk Apr 15 '16 at 18:27
  • Incidentally, 32 bit alpha has always existed. See Windows NT Alpha. – Joshua Apr 15 '16 at 20:34
  • 8
    @Joshua Always existed? Did the pharaohs know this? – candied_orange Apr 15 '16 at 22:05
  • 3
    How can 64 bit "easily double" memory footprint? Maybe if every instruction is a pointer to a long constant, but that seems very unlikely. – user949300 Apr 16 '16 at 03:31
  • 4
    *"64bit vs 32bit can easily double the memory requirements"*, not quite easily... Most programs have things other than pointers (such as code, strings and so on). – hyde Apr 16 '16 at 08:12
  • 1
    @hyde As this [Wikipedia article on 64-bit computing](https://en.wikipedia.org/wiki/64-bit_computing#32-bit_vs_64-bit) shows, 64-bit CPUs have 64-bit registers and 64-bit data and address lines. Thus, everything is moved in and out of the CPU 64-bits at a time. If only 32-bits (for example) are significant for a given operation, then only half the register is being used. The other half is wasted. Moving 8 bytes into the CPU would still require two fetches unless the compiler optimized the 32-bit code to collapse the 2 fetches into one. Two fetches take the same amount of time, regardless. – DocSalvager Apr 21 '16 at 18:26
8

The difference between 32 bit software and 64 bit software is the size of the pointers, and maybe the size of the integer registers. That's it.

That means all pointers in your program are twice the size. And (at least on an ILP32/LP64 architecture) your longs are twice the size as well. This typically works out to about a 30% increase in object code size. This means that …

  • your object code will take ~30% longer to load from disk into RAM
  • your object code will take up ~30% more space in memory
  • you have effectively lowered your memory bandwidth (for object code) by ~20%
  • you have effectively lowered the size of the instruction cache by ~20%

This has a non-negligible negative effect on performance.

Doing this only makes sense if you can "buy back" those performance costs somehow. Basically, there are two ways to do that: either you do a lot of 64 bit integer math, or you need more than 4 GiByte of mapped memory. If one or both of those is true, it makes sense to use 64 bit software; otherwise it doesn't.
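
As a sketch of the first case, here is a hypothetical hash-style loop dominated by 64 bit multiplications; a 64 bit build does each operation in one native instruction, while a 32 bit build has to synthesize every 64 bit XOR and multiply from several 32 bit instructions:

```c
/* Minimal sketch of the "lots of 64 bit integer math" case (FNV-1a-style mix).
 * On a 64 bit build each XOR and multiply below is a single native operation;
 * a 32 bit build must emulate them with several 32 bit instructions. */
#include <stdint.h>
#include <stddef.h>

uint64_t mix64(const uint64_t *data, size_t n) {
    uint64_t h = 1469598103934665603ULL;    /* 64 bit seed (FNV offset basis) */
    for (size_t i = 0; i < n; i++) {
        h ^= data[i];
        h *= 1099511628211ULL;              /* 64 bit multiply per element */
    }
    return h;
}
```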

Note: there are some architectures where there are no corresponding 32 or 64 bit variants. In that case, the question obviously doesn't make sense. The most well-known are IA64, which is only 64 bit and has no 32 bit variant, and x86/AMD64 which are, albeit closely related, different architectures, x86 being 32 bit only, AMD64 being 64 bit only.

Actually, that latter statement is not 100% true anymore. Linux recently added the x32 ABI, which allows you to run AMD64 code with 32 bit pointers, so even though that's not a "proper" CPU architecture, it is a way of using the AMD64 architecture in such a way as if it had a native 32 bit variant. This was done precisely because the performance overhead I mentioned above was causing real measurable, quantifiable problems for real-world users running real-world code in real-world systems.

Jörg W Mittag
  • 101,921
  • 24
  • 218
  • 318
  • 8
    What about the extra registers and instructions in amd64 compared to x86? How much does that improve performance? – Filip Haglund Apr 15 '16 at 11:44
  • 2
    Google for "tagged pointers" used in Objective-C on MacOS X and iOS. Very substantial amounts of objects have no memory allocated whatsoever but the whole object is faked within the pointer on 64 bit systems. (I heard Java does something similar). In C++, std::string on 64 bit often contains up to 22 characters in the object itself without any memory allocation. Substantial memory savings and speed improvements. – gnasher729 Apr 15 '16 at 13:53
  • 3
    Size of pointers and integers is it? What about the larger address space and additional registers in most 64 bit architectures? –  Apr 15 '16 at 14:18
  • 1
    _"you've [reduced] the instruction cache by ~20%"_ is moot since the instruction set is completely different _(and often more efficient)_ – BlueRaja - Danny Pflughoeft Apr 15 '16 at 16:26
  • 3
    "This has a **non-negligible** negative effect on performance." While this statement is true in an absolute sense, it ignores the fact that the vast, vast majority of applications' performance bottlenecks are not in load time, or memory usage/bandwidth, or number of instructions in the cache. – Ian Kemp Apr 15 '16 at 22:26
  • 1
    @FilipHaglund Not always. I tested CPU-bound code on both builds under Windows on a server-class machine (Xeon) and found the 32-bit was faster! The extra registers don't make up for the loss of cache due to larger structure sizes and larger code size. Unless you are using specific features enabled in x64 like 64-bit multiplication or super floating point SIMD, the memory usage wins. – JDługosz Apr 16 '16 at 03:42
  • 1
    This is wrong! There are plenty of differences other than "size of the pointers, and maybe the size of the integer registers". The main one is additional instructions. – Navin Apr 16 '16 at 08:30
  • 1
    @FilipHaglund Depends very much on what you're doing. For RSA and Elliptic Curve Crypto a factor 2 advantage due to 64 bit registers/multiplications is quite possible. For other algorithms there is no advantage at all and you needlessly pay the cost of double sized pointers. – CodesInChaos Apr 16 '16 at 11:00
6

If the software needs to interface directly with legacy systems, drivers, or libraries, then you may need to supply a 32-bit version, since the OS generally (definitely Windows and Linux, AFAIK) doesn't allow mixing 64-bit and 32-bit code within a process.

For example, if your software needs to access specialty hardware, it's not uncommon for customers to operate older models for which only 32-bit drivers are available.
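
A common workaround in that situation is to isolate the 32-bit-only dependency in a separate helper process and talk to it over IPC. A minimal sketch follows; legacy32_helper is a hypothetical 32-bit executable that links against the old driver library, and plain POSIX popen() stands in for whatever pipe or RPC mechanism the platform provides (on Windows, CreateProcess plus pipes plays the same role):

```c
/* Minimal sketch: a 64-bit application using a 32-bit-only library indirectly,
 * by running a hypothetical 32-bit helper process and reading its output. */
#include <stdio.h>

int main(void) {
    FILE *helper = popen("./legacy32_helper --query-device", "r"); /* hypothetical helper */
    if (!helper) {
        perror("popen");
        return 1;
    }
    char line[256];
    while (fgets(line, sizeof line, helper))   /* relay the helper's results */
        fputs(line, stdout);
    return pclose(helper) == 0 ? 0 : 1;
}
```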

Peter Mortensen
  • 1,050
  • 2
  • 12
  • 14
Michael Borgwardt
  • 51,037
  • 13
  • 124
  • 176
  • 2
    You *can* mix 32 bit and 64 bit in the same process in both Windows and Linux: http://stackoverflow.com/q/12716419/703382 – Navin Apr 16 '16 at 08:28
  • 1
    @Navin: But is it practical? Could you use a [COM component](http://en.wikipedia.org/wiki/Component_Object_Model) in a 64-bit Windows application (e.g. a .NET application marked as *Any CPU* running on a 64-bit version of Windows)? – Peter Mortensen Apr 16 '16 at 11:34
4

If your software is a DLL, you MUST provide both 32-bit and 64-bit versions. You have no idea whether the customer will be using 32-bit or 64-bit software to talk to the DLL, and the DLL has to use the same bitness as the application. This is non-negotiable.
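
As a minimal, Windows-specific sketch of what a bitness mismatch looks like in practice: loading a DLL whose bitness differs from the host process fails with ERROR_BAD_EXE_FORMAT. The DLL name vendor_plugin.dll is hypothetical:

```c
/* Minimal sketch: a wrong-bitness DLL cannot be loaded into this process. */
#include <windows.h>
#include <stdio.h>

int main(void) {
    HMODULE mod = LoadLibraryA("vendor_plugin.dll");  /* hypothetical DLL */
    if (mod == NULL) {
        DWORD err = GetLastError();
        if (err == ERROR_BAD_EXE_FORMAT)              /* 32/64-bit mismatch */
            fprintf(stderr, "DLL bitness does not match this process\n");
        else
            fprintf(stderr, "LoadLibrary failed, error %lu\n", err);
        return 1;
    }
    FreeLibrary(mod);
    return 0;
}
```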

If your software is a standalone executable, it's less clear. If you don't need your software to run on older OSes, you may not need to provide a 32-bit version. Just stick to 64-bit, specify that it requires a 64-bit OS, and job done.

However if you do need your software to run on older OSes then you may actively NOT want to provide a 64-bit version. If you have two versions then you have double the testing, and properly testing software across a range of OS versions and languages is not a quick process. Since 32-bit software runs perfectly happily on a 64-bit platform, it's still fairly common for software to be released only as 32-bit, especially by smaller developers.

Also note that most mobiles are 32-bit. Maybe some high-end ones are 64-bit now, but there's little compelling reason to make that step. So if you're developing cross-platform and might want your code to run on Android as well, staying 32-bit is a safe option.

Graham
  • 1,996
  • 1
  • 12
  • 11
  • 1
  • I would argue against your position on reduced testing. I would instead argue for testing on multiple platforms, particularly with not only different register sizes but also different byte orders, as an easy way to increase testing and catch subtle errors. In addition, I would also do testing on computers that do not meet your recommended minimum hardware requirements, as that will also expose additional issues that might not show up otherwise except with very large data sets. – hildred Apr 15 '16 at 15:55
  • 1
  • @hildred With unlimited testing resources, I'd agree. In practice though, if you have more control over your target then you may not need to do this testing immediately. It's not at all an "easy way" either - for sure you can simulate some of these platforms in a VM, but if you need physical hardware set up then this involves large amounts of manual (non-automatable) work. It may save you from writing a test harness to test this explicitly, but it's not free by any means. – Graham Apr 15 '16 at 16:47
  • 1
  • Not free, but downright cheap. If you limit your off-platform testing to automated tests, the occasional idiot test, and used hardware, then aside from your initial setup, your costs for successful tests would be limited to power and about 7 man-minutes per test pass. The cost for failed tests would of course be higher, but those would usually be worth more (there is always hardware failure). This type of setup is particularly useful to C programmers because it readily exposes a certain class of pointer problems that are otherwise hard to track down. – hildred Apr 15 '16 at 17:00