
OK, I feel stupid asking this, but in Jeff's article Getting the Interview Phone Screen Right (originally stated in the Five Essential Phone Screen Questions):

They shouldn't stare blankly at you when you ask what 2^16 is. It's a special number. They should know it.

I've been a developer/software engineer/code monkey/whatever for a little while now, and I don't think I've ever come across this. I mean, I can certainly count binary values, do basic operations on them, etc., but I don't see what is "special" about this value.

javamonkey79
    It's a power of 2? :) – Michael K Feb 01 '11 at 17:56
    @Michael `2^16.1` is also a power of 2, but nothing special. Same for `2^0`. –  Feb 01 '11 at 19:21
    @Radek: `2**0` is `1`, which is in fact a very special number ;) But yes, we're generally only concerned with powers of two where the exponent is a positive integer. –  Feb 01 '11 at 19:38
    Well, why is 10^3 or 10^6 a special number? –  Feb 01 '11 at 22:14
    Is this an interview question from the 90s? 16 bits were special back then – Chris S Feb 01 '11 at 23:11
    I read the linked article. I wonder why Martin did not bother to test his code. – leed25d Apr 23 '13 at 06:25
    @user1249 `10^3` & `10^6` are very special numbers, 1 kilobyte & 1 megabyte respectively, according to most hardware manufacturers – recursion.ninja Jul 19 '13 at 23:35
  • `2^16` is the 5th member of sequence [`2^(2^n)`](http://oeis.org/A001146), and as we scale our architectures by doubling the number of bits - all the numbers in this sequence are very special. – c69 Jul 20 '13 at 12:46
  • This makes me sad – JoelFan Sep 15 '14 at 17:53
  • @AgiHammerthief not according to the folks who manufacture your hard-disks. They interpret the SI prefixes literally to short you some bytes... drives me crazy. My comment was a joke referencing this silliness. – recursion.ninja Dec 18 '14 at 14:48
  • @AgiHammerthief *ignorance* is ignoring the fact that [in 1998, the International Electrotechnical Commission (IEC) officially standardized a new set of base-2 SI prefixes](https://en.wikipedia.org/wiki/Binary_prefix), such that a kilobyte (KB) is now literally 1,000 bytes as its SI prefix implies, and a *kibi*byte (KiB) is 1024 bytes. A good programmer ought to keep up with standards. Especially drafts that are damn-near 20 years old. – Braden Best Nov 02 '18 at 06:45
  • @BradenBest I know what ignorance is and I'm choosing to ignore the SI/IEC definition that uses powers of 10, because I was taught that 8-bit to 64-bit computers work with powers of 2, not 10. – Agi Hammerthief Nov 02 '18 at 06:51
  • @AgiHammerthief I don't see what that has to do with keeping your terminology up-to-date. When I learned the difference between kilo- and kibi-, I adapted easily, and I'm very familiar with binary, octal and hex, enough such that I can trivially implement things like a 16-bit PRNG or a base64 codec, or reason about chmod's behavior in *NIX. Knowing binary, and how it relates to computers, has nothing to do with wanting to follow standards. If you're not willing to change your terminology, at least don't assume that people who insist that kilo means kilo are ignorant to how computers work. – Braden Best Nov 02 '18 at 07:06

3 Answers


(2^16 - 1), i.e. 65535 or 0xFFFF, is the maximum value of 2 bytes; 2^16 = 65536 is the "64K" limit. For a long time CPUs used a 16-bit architecture, and OSes were likewise based on 16-bit operations and "words". There were 16-bit instructions and 16-bit memory addresses. A lot of systems/compilers still use 16 bits for integers.

So, (2^16 - 1) is special because it is the largest number that a 16-bit (unsigned) integer can hold and the largest memory address that a 16-bit architecture can access.
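Those limits are easy to verify. A minimal Python sketch (illustrative only; real 16-bit hardware does the masking for you):

```python
# 2^16 - 1 is the largest value an unsigned 16-bit integer can hold.
MAX_U16 = 2**16 - 1
print(MAX_U16)                 # 65535
print(hex(MAX_U16))            # 0xffff

# On a 16-bit register, arithmetic wraps modulo 2^16; we can emulate
# that here by masking with 0xFFFF.
print((MAX_U16 + 1) & 0xFFFF)  # 0 -- incrementing 65535 overflows to 0
```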

Travis Christian

From the full body of Steve Yegge's article:

Candidates should know what bits and bytes are. They should be able to count in binary; e.g. they should be able to tell you what 2^5 or 2^10 is, in decimal. They shouldn't stare blankly at you when you ask what 2^16 is. It's a special number. They should know it.

I was thrown off by the bit you quoted in the question; it sounded like a candidate should be able to describe its significance, but in context he's saying that candidates should know, off the top of their head, what the decimal conversion of 2^16 is.

The significance of this is that since we humans still use decimal for counting, especially in our heads (in most circumstances), we need to know the rough capacities of the common byte blocks that we use for storage, memory, or even character encoding. Since a byte is 8 bits, the most common are 8, 16, 24, 32, and 64.

At the present time I would say 2^32 is the most commonly occurring capacity a developer deals with. I am suspicious of developers who don't know that 2^32 is roughly 4 billion (max value of ~2 billion if signed), since it means they've never bothered to find out roughly how many records can be stored in their databases that use 32-bit ints for primary keys, or when old code using 32-bit ints for IDs, dates, etc. will need to be refactored to 64-bit.[1]
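Those magnitudes can be sanity-checked in any REPL; for example, in Python:

```python
# 2^32 is roughly 4 billion; a signed 32-bit int tops out near 2 billion.
print(2**32)       # 4294967296  (~4.29 billion distinct values)
print(2**31 - 1)   # 2147483647  (max signed 32-bit value, ~2.15 billion)
```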

2^16 is the total capacity of a Java `short` (it holds values between -2^15 and 2^15 - 1).
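As a sketch of that signed 16-bit range, Python's standard struct module can round-trip Java-short-sized values (the format code 'h' means a signed 16-bit integer):

```python
import struct

# 2^16 distinct values, from -2^15 to 2^15 - 1 (a Java short's range).
lo, hi = -2**15, 2**15 - 1
print(lo, hi)                                          # -32768 32767

# 'h' packs a signed 16-bit value; anything outside that range is rejected.
print(struct.unpack('<h', struct.pack('<h', hi))[0])   # 32767
try:
    struct.pack('<h', hi + 1)                          # 32768 does not fit
except struct.error:
    print('out of range')
```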

A developer should know by heart what 2^8 is. Among the many common usages is ASCII character encoding.

I wouldn't expect a programmer to know 2^14 or 2^18 at all, but I would probably expect them to know 2^16, since it's a very commonly occurring number and 65536 is short enough to remember in full.


[1]: If you browse the leaderboards of Call of Duty: MW2 or iPhone Game Center you'll often see cheaters at the top with high-score values of 2,147,483,647, which is 2^31 - 1, the maximum value of a signed 32-bit integer.

Nicole
    A good thing to remember is that 2^10 is about one thousand, and from there you get 2^20 is a million, and 2^30 a billion and so on. And with help of this it's somewhat easy to "see" that 2^14 is sixteen million, so you can quickly do mental conversions between binary and decimal. And 2^64 is 16 * 10^18, which is hmm... 16 billion billions :) – Maglob Feb 01 '11 at 18:38
  • @Maglob, thanks, good point. It's worth noting how you arrived at 2^64, since it took me a minute. 1000 is 10^3 which is roughly 2^10. 64 is 4 and 60. 60/10 is 6, 6 * 3 is 18, so you get: 10^18 * (2^4 or 16) ... – Nicole Feb 01 '11 at 18:45
    Uuh. I made a typo above (and somehow can't edit it) 2^14 is of course sixteen *thousand*, not million. (I guess it was not that easy to "see" after all :) – Maglob Feb 02 '11 at 11:39
  • +1 Good answer, because a more complete context is that you are looking for the person being able to think, and in particular to have knowledge of the context and domain. – quickly_now Apr 23 '13 at 06:49
    Um, ASCII is 7-bit.. ;-) – Brendan Jul 20 '13 at 09:46
  • (2^31-1) is probably more relevant than (2^32-1). I don't know the exact value, only the range -2'000'000'000 to +2'000'000'000. – Florian F Sep 15 '14 at 20:58
  • @Maglobob just be aware that the approximation gets less accurate as the numbers get bigger. To two significant figures 2^64 is actually 18 * 10^18 – Peter Green Oct 16 '16 at 02:42
  • I know what 8-bit is by heart... what is 8-bit? You might as well expect me to know what "blue" is by heart. As things go, that's a pretty meaningless statement. – iheanyi Jul 28 '17 at 21:41
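The mental-math shortcut in the comments (2^10 ≈ 10^3) can be checked directly; a quick Python sketch:

```python
# 2^10 = 1024 is roughly 10^3, so 2^64 = 2^4 * (2^10)^6 is roughly 16 * 10^18.
approx = 16 * 10**18
exact = 2**64
print(exact)           # 18446744073709551616
print(exact / approx)  # ~1.15 -- the shortcut undershoots by about 15%
```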

The only reason I can see for regarding 2^16 as "special" is that it's one more than the highest integer you can store in a single register on a 16-bit operating system.

Similarly, you could apply the same logic to 2^32 and 32-bit operating systems.

I'd need to know more context for the question before being able to say whether it was a significant piece of knowledge or not.

ChrisF
  • I swear I must have filtered out most of what you wrote. All I saw was 2^16 and highest integer. – ChaosPandion Feb 01 '11 at 17:55
    This is inaccurate. It is perfectly possible to have a 32-bit (or higher) data type in a 16-bit operating system. A 16-bit integer is simply the largest value that can be stored in a single register on a 16-bit processor. – Chinmay Kanchi Feb 01 '11 at 23:37
  • @Chinmay - point taken ;) – ChrisF Feb 01 '11 at 23:38