7

I played with 8-bit machines (C64), I used 16-bit machines (Win 3.1), and I enjoyed a flat 32-bit address space (Linux).

Every time, as a user (which also meant being a developer, for fun or profit), I felt the need for more bits and welcomed the next, wider architecture. Accessing a bigger memory space got easier, graphics got better, and sound was bliss after 8-bit.

However, when the marketing drums started to kick in for 64-bit some years ago, I thought, "OK, this is the Internet age; they can use this kind of power to do more calculations more easily, and commodity hardware manufacturers want to broaden their portfolio for this kind of market."

Professionally, I became a "system programmer" working on mobile devices, which somewhat means being an "embedded programmer" as well. Working mostly on ARM-based products that were 32-bit from the start, after a while I forgot about architectures other than 32-bit.

Also, when ARM announced a new 64-bit architecture towards the end of 2011, I read it as ARM wanting to go into the server market and grow their portfolio, which makes sense.

Now, with the new iPhone 5S claiming to be the first smartphone with a 64-bit processor, my thoughts became a little unsettled again. Mobile devices, being the ultimate personal devices, are becoming 64-bit.

So I wonder: is there something I'm missing? What does being 64-bit offer to users, including programmers?

gnat
  • 21,442
  • 29
  • 112
  • 288
auselen
  • 361
  • 1
  • 6
  • For some desktop machines (for gamers and people who need more than 4 GB of RAM) and for servers, the answer is obvious. That leaves the smartphone part, which is a duplicate of a recently asked question. – Arseni Mourzenko Sep 13 '13 at 08:55
  • 1
    Hmm, maybe it can be considered a duplicate, but I would like to see my question as a broader one about visible/perceivable features of 64-bit architectures, rather than about the iPhone 5S's 64-bit promotion. And please see my answer there. – auselen Sep 13 '13 at 09:05
  • 1
    it's broader, but again, for desktop PCs and servers, the benefits of x64 are obvious. – Arseni Mourzenko Sep 13 '13 at 09:07
  • for iPhone 5's 64-bit http://programmers.stackexchange.com/a/211382/102124 – auselen Sep 13 '13 at 09:17
  • 3
    @MainMa obvious how for end users? You can get more than 4GB of memory on 32-bit architectures. Heard of PAE? – auselen Sep 13 '13 at 09:20
  • 3
    PAE is an ugly kludge (I don't even call it a hack). mmap()'ing big files is a great way to do databases; there is no way to do that with PAE. – Javier Sep 13 '13 at 13:56
  • @MainMa: There are advantages, but they are anything but obvious. They might be obvious to users of the desktop versions of Windows, where MS artificially limits 32-bit versions to 4GB. – vartec Sep 13 '13 at 14:48
  • Someone downvoted the question. Is the question really that poor? Funny. – auselen Sep 13 '13 at 15:07
  • 1
    Everyone needs to distinguish **address width** from **data processing width**. On state-of-the-art servers, such as the 60-core Xeon Phi, address width remains at 64 bits, but data processing width has gone to 512 bits. Mobile processors might take a similar turn, with data-processing width expected to increase to take advantage of silicon and energy-efficiency improvements. – rwong Sep 13 '13 at 19:37
  • The obvious answer is support for more than 4GB of memory in a single address space. But in my experience it speeds up the code itself by a lot as well. A lot of the code I'm working with is twice as fast when running as 64 bit process. – CodesInChaos Sep 14 '13 at 15:53
  • You may find this recent Ask Slashdot to be useful (well, the articles - the comments are... slashdot). [Why Apple Went 64-Bit With the iPhone 5s](http://mobile.slashdot.org/story/13/09/13/2039224/why-apple-went-64-bit-with-the-iphone-5s) –  Sep 14 '13 at 20:07
  • I've been mostly using 64-bit architectures from around 1991. Could never understand how people tolerated limited 32-bit address space for so long. `man 2 mmap` for details. – SK-logic Sep 18 '13 at 09:06

3 Answers

12

It's important to distinguish between 64-bit architectures in general and the 64-bit architectures we commonly see. In an abstract sense, a 64-bit architecture just gives you wider registers (bigger numbers and more addressable memory). Looking at concrete examples of architectures, you see that the 32-to-64-bit jump was used as an opportunity to make significant, incompatible improvements in processor design.

The first thing that comes to mind: looking at both x86_64 and ARMv8, you see a significant increase in the number of registers available. Both architectures have doubled the number of general-purpose registers on the processor, which greatly increases the opportunities for optimizing software. Similarly, they have improved vector processing capabilities. Moving to a new, incompatible architecture also gives designers the opportunity to remove little-used features that were dragged along through the years for the sake of backwards compatibility.
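
To get a feel for the register-count point, consider a rough sketch like the one below (a hypothetical example; whether values actually stay in registers is up to the compiler). On 32-bit x86 only eight general-purpose registers exist, so a loop like this tends to spill its accumulators to the stack; on x86_64 (16 registers) or AArch64 (31 registers) they can all live in registers, and each 64-bit accumulator takes one register instead of a pair.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical example: four independent 64-bit accumulators keep the
 * pipeline busy, but they also need four (on 32-bit: eight) registers.
 * Compare the assembly from `gcc -O2 -S` for a 32-bit and a 64-bit
 * target (e.g. with and without -m32, if your toolchain supports it). */
uint64_t checksum(const uint32_t *data, size_t n)
{
    uint64_t s0 = 0, s1 = 0, s2 = 0, s3 = 0;

    for (size_t i = 0; i + 4 <= n; i += 4) {   /* tail elements ignored for brevity */
        s0 += data[i];
        s1 += data[i + 1];
        s2 += data[i + 2];
        s3 += data[i + 3];
    }
    return s0 + s1 + s2 + s3;
}
```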

Sean McSomething
  • 3,781
  • 17
  • 25
9

One major benefit of having a huge address space is that you can represent all storage simply as an address. For example, my laptop has 16 GiByte of RAM and a 1 TiByte HDD. With 64-bit addresses, you could easily address all that storage through a unified API; 32 bits wouldn't be enough. I could even individually address all of my USB key fobs, SD cards, microSD cards, external HDDs, NAS, Dropbox, Google Drive, etc.

Even my phone has 1 GiByte of RAM, 4 GiByte of internal flash, and a 32 GiByte microSD card.

Unfortunately, no operating system does this yet (with the exception of OS/400, which has been doing exactly this since 1989, although it actually uses 128-bit addressing), but that would be one benefit.
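
You can already get a taste of this on a 64-bit system: mmap() an entire multi-gigabyte file and treat it as plain memory, something a 32-bit process cannot do once the file no longer fits into its 4 GiByte of virtual addresses. A minimal sketch, assuming a POSIX system (`huge.dat` is just a placeholder name):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("huge.dat", O_RDONLY);            /* placeholder file name */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the whole file; the kernel pages it in on demand. In a 32-bit
     * process this fails for files that don't fit the address space. */
    const unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* The file is now "simply there": byte i of the file is p[i]. */
    unsigned long long sample = 0;
    for (off_t i = 0; i < st.st_size; i += 4096)
        sample += p[i];
    printf("sampled sum: %llu\n", sample);

    munmap((void *)p, st.st_size);
    close(fd);
    return 0;
}
```

No read loops, no buffers; the pointer arithmetic is the I/O.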

Even without such futuristic changes, there is a rather simple reason: at the rate RAM capacity in smartphones is growing, we're going to hit the "4 GiByte wall" real soon now.

Jörg W Mittag
  • 101,921
  • 24
  • 218
  • 318
  • 3
    There is no 4GB wall. You'll get more than 4GB on 32-bit mobiles. It is called "Large Physical Address Extension architecture" on ARM. See http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0438c/CHDCGIBF.html – auselen Sep 13 '13 at 09:22
  • exactly, with LPAE there is *"1TB RAM wall"* :-P – vartec Sep 13 '13 at 10:42
  • 6
    @vartec The 4GiB wall is real, not misleading; LPAE is just a workaround. – Pieter B Sep 13 '13 at 10:54
  • Also, when talking about addressing filesystems and flash sizes: systems use pages to address them, and that gives you at least 10 extra bits. – auselen Sep 13 '13 at 12:43
  • PAE won't give a single process more than 4GB. It doesn't allow you to mmap() a file bigger than 4GB, much less a whole device. – Javier Sep 13 '13 at 13:54
  • @auselen, filesystem addressing isn't related to the CPU's bit width. Unless you want to mmap() that storage; then you _do_ use byte addresses and _do_ need 64 bits to cope with big files/devices. – Javier Sep 13 '13 at 13:59
  • then I guess it boils down to this question: Is 4GB enough for ordinary user/programmer? An HD image is 8MB, 4K would be 32MB. Do you need to mmap more than 4GB files? etc. – auselen Sep 13 '13 at 18:14
  • Yeah, I would like to see mmap() become the normal way to access storage devices. Again and again we have to play games with buffers while dealing with files. Map it and all those games vanish, the whole file is simply there. It would also speed up save operations as the save would finish while the dirty pages were still being flushed to disk. – Loren Pechtel Sep 14 '13 at 18:06
  • A file system is not RAM, and I'm not clear why one would want to address it as RAM. The only way to write a single byte on a flash drive is to read a sector into a buffer, modify the buffer, and then write it back. Even if one's OS were willing to do that, performance would be positively dreadful. Further, what would a "pointer" to a block of a file mean if the file were truncated or deleted? – supercat May 27 '14 at 22:56
2

The fundamental advantage is larger register size (and ALU capability).

Basically, I can now multiply ten trillion by 7 in one step. Before, with 32 bits, I had to split the operands into pieces that fit in 32-bit registers and store intermediate results in memory.

This doesn't seem like a big deal until you look at how many times code written for a 32-bit processor has to split a math operand, a four-word hash key, or a high-precision floating-point value into pieces just to sort or branch on it.
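
As a hypothetical illustration of that splitting (on a real 32-bit target the compiler emits this kind of sequence for you), here is a 64-bit multiply rebuilt from 32-bit pieces, next to the native version that a 64-bit register file gives you in a single instruction:

```c
#include <stdint.h>
#include <stdio.h>

/* 64x64 -> 64 multiply built from 32-bit halves (schoolbook method;
 * the a_hi*b_hi partial product only affects bits above 63 and is dropped). */
static uint64_t mul64_from_32bit_pieces(uint64_t a, uint64_t b)
{
    uint32_t a_lo = (uint32_t)a, a_hi = (uint32_t)(a >> 32);
    uint32_t b_lo = (uint32_t)b, b_hi = (uint32_t)(b >> 32);

    uint64_t lo_lo = (uint64_t)a_lo * b_lo;
    uint64_t cross = (uint64_t)a_lo * b_hi + (uint64_t)a_hi * b_lo;

    return lo_lo + (cross << 32);
}

int main(void)
{
    uint64_t a = 10000000000000ULL;   /* ten trillion: doesn't fit in 32 bits */
    uint64_t b = 7;

    /* On a 64-bit CPU the first line is one multiply instruction;
     * on a 32-bit CPU the compiler has to do roughly what the helper does. */
    printf("native : %llu\n", (unsigned long long)(a * b));
    printf("pieces : %llu\n", (unsigned long long)mul64_from_32bit_pieces(a, b));
    return 0;
}
```

Compiling the same source for a 32-bit ARM or x86 target shows the compiler generating a similar multi-instruction sequence even for the plain `a * b` line.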

Andyz Smith
  • 853
  • 5
  • 12