
When downloading various software packages and executables for Windows, I always see two different types of executables to download: one just says ...32-bit and the other always says ...amd64. I know this has nothing to do with AMD, but it refers to 64-bit operating systems, so why is this still the norm? Even large companies like Google and Ubuntu set up their packages like this.

Thanks for any insight!

~Carpetfizz


1 Answer


The 64-bit extension for 80x86 processors (nowadays called just x86) was invented by AMD, which is why the architecture is known as AMD64 and downloads are still labeled amd64.

Back then Intel was betting on the Itanium line for servers and even went on record saying that "64 bits won't be needed on the desktop anytime soon".

AMD, on the other hand, was producing the successful Athlon line, which for a short while was much faster and cheaper than Intel's Xeon chips. Adding 64-bit capability while retaining compatibility with existing software put AMD back on the map. Intel quickly had to license the 64-bit extensions (its implementation was first branded EM64T and is now called Intel 64), so it could now be said that Intel chips are AMD-compatible, and not the other way around...
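That AMD heritage still surfaces in today's toolchains. As an illustrative aside, here is a minimal C sketch using the standard predefined macros (GCC and Clang define __amd64__ and __x86_64__; MSVC defines _M_AMD64 and _M_X64; all name the same architecture):

    #include <stdio.h>

    int main(void)
    {
        /* GCC/Clang predefine __amd64__ (and __x86_64__) on 64-bit x86;
         * MSVC predefines _M_AMD64 (and _M_X64) for the same target. */
    #if defined(__amd64__) || defined(_M_AMD64)
        puts("built for amd64 (a.k.a. x86-64 / x64)");
    #elif defined(__i386__) || defined(_M_IX86)
        puts("built for 32-bit x86");
    #else
        puts("built for some other architecture");
    #endif
        return 0;
    }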

Javier
  • Thanks, this makes sense, I'll mark you as correct in 8 mins :) – Carpetfizz Aug 13 '13 at 14:16
  • Yep. Though, sometimes I wonder if we would have been better off if Intel had succeeded in their plan to replace the x86 architecture with IA-64. Thanks a lot, AMD :) – James Adam Aug 13 '13 at 14:29
  • Nice dive into the recent history of the chips! – Yusubov Aug 13 '13 at 14:33
  • Would 64-bit processing really be needed on the desktop in the presence of an architecture which used 32-bit object identifiers to reference objects of up to 4GB each? Much as people maligned segmented architecture in the 1980's, it works well when languages make a distinction between "reference to start of an object" [one word] and "general-purpose pointer" [two words], one doesn't mind padding objects to fit with segment spacing [16 bytes in 1980], and one doesn't need individual objects larger than a maximum-size segment [64K in 1980]. IMHO, conditions today are such that... – supercat Aug 13 '13 at 15:27
  • ...even a simplistic addressing scheme where a 32-bit segment ID would be scaled by 16 and added to an offset could be more efficient than 64-bit linear addressing for individual applications that need less than 64 gigs of RAM each (since twice as many object references would fit in each cache line). Allow the top few bytes of each segment register to serve as a segment-table index (with configurable base addresses and scale factors) and one could retain the 32-bit object-reference size while going far beyond 64 gigs (a sketch of this addressing arithmetic follows these comments). – supercat Aug 13 '13 at 15:31
  • @JamesAdam: I dunno, I think the only architecture I've seen that's worse than x86 was IA-64. (Yeah, let's move instruction scheduling into the compiler -- that's a good idea!) – TMN Aug 13 '13 at 15:31
  • @TMN: Actually, moving instruction scheduling into the compiler *IS* a good idea. It simplifies the frack out of the silicon if you can throw away the gadgets needed to stall instructions until their operands are available. It also removes the need for the silicon to have all the smarts necessary to do the reordering in the first place. – John R. Strohm Aug 13 '13 at 15:42
  • @JohnR.Strohm: I remember being excited when I first read about VLIW designs (a few years before Itanium), but it would be interesting to hear what JIT programmers think about it. Nowadays a huge portion of running code is generated by some JIT and not a compiler. – Javier Aug 13 '13 at 15:50
  • @JohnR.Strohm: Maybe it would have been good given a simpler architecture, but ISTR one of the reasons for slow acceptance of IA-64 was the lack of good compilers that could get the most out of the hardware. And Intel has one of the best compiler teams in the business. – TMN Aug 13 '13 at 16:18
  • VLIW for general-purpose computing has proven to be an idea that never worked as well in practice as theory suggested it would. ATI had a good run with their VLIW5 and VLIW4 GPU architectures, but they were helped immensely by DirectX 9's fixed 5-stage pipeline. VLIW4 was created as a transitional architecture when most new games went to DX10, which didn't have that sort of rigid pipeline, because even in the simpler world of graphics programming the compiler was almost never able to pair more than 4 instructions. AMD's current GCN architecture has dropped VLIW entirely. – Dan Is Fiddling By Firelight Aug 13 '13 at 17:38
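For concreteness, here is a minimal C sketch of the addressing arithmetic supercat describes above (the function name is illustrative, not any real API): a 32-bit segment ID scaled by 16 and added to an offset covers a 36-bit linear space, i.e. 64 GiB, while object references stay 32 bits wide.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative sketch: 2^32 segments at 16-byte spacing span 64 GiB. */
    static uint64_t linear_address(uint32_t segment_id, uint32_t offset)
    {
        return ((uint64_t)segment_id << 4) + (uint64_t)offset;
    }

    int main(void)
    {
        /* The highest segment base sits 16 bytes below the 64 GiB mark. */
        printf("0x%llx\n", (unsigned long long)linear_address(UINT32_MAX, 0));
        return 0;
    }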