12

I find several sources claiming that a power-of-two number of bits in a binary word (such as 8 bits per byte) is a "good thing" or "convenient". I find no source pointing out why.

From What is the history of why bytes are eight bits? we read in the accepted answer:

Binary computers motivate designers to making sizes powers of two.

OK, but why? In the comments on that same question I find:

Is the last sentence in jest? A 12-bit byte would be inconvenient because it's not a power of 2. - robjb

Again, no rationale is given...

address calculations are a heck of a lot simpler with powers of 2, and that counts when you're making logic out of raw transistors in little cans - Mike

As bytes are the smallest addressable unit, this does not make much sense. Lots of upvotes on the comment though. Maybe I missed something.

From Wikipedia:

The de facto standard of eight bits is a convenient power of two permitting the values 0 through 255 for one byte

And this would be convenient because...?

For clarification, this is about the number of bits per byte (e.g. 8 or 6, etc.), not the number of values per byte (e.g. 2⁸ or 2⁶, etc.). Because of the confusion I also point out that this is not about word sizes.

I'm not overly interested in historical reasons. Those have been well explained elsewhere (see links).


Related question on SO: https://stackoverflow.com/questions/1606827/why-is-number-of-bits-always-a-power-of-two

Andreas
    I think that before you can answer that question, you'll have to decide what "byte" means. Is it the smallest addressable unit of memory?, the smallest unit of data that can be transmitted over some interface?, the size of a character in some String data type? I cut my teeth on the PDP-10 architecture where all three of those were different sizes. – Solomon Slow Aug 09 '16 at 20:58
  • @jameslarge Being a C/C++ programmer I think of the byte as the smallest addressable unit of memory. Does that help? – Andreas Aug 10 '16 at 19:38
  • check the section "Middle Age" in the [top answer](http://programmers.stackexchange.com/a/81715/31260); it explains in thorough detail why power-of-two bits are considered "convenient". You might also enjoy reading the subsection titled "Building a Turing Machine from Boolean Gates", although it is not as strictly related to your question – gnat Aug 10 '16 at 19:38
  • @gnat I disagree that the "Middle Age" section there explains anything about the convenience of power-of-two-bit bytes. It's just a walk-through of how to build a bare-minimum binary NAND gate from transistors, nothing about combining multiple bits into one unit. – 8bittree Aug 10 '16 at 21:20
  • you got to be kidding. As soon as you start combining multiple bits into one unit, everything naturally becomes power of two – gnat Aug 10 '16 at 21:30
  • 3
    @gnat I'm pretty sure we're talking about the number of bits per byte (i.e. 8 in an 8 bit byte) here, not the number of values a byte can represent (i.e. 2^8 in an 8 bit byte). So if you have, for example, a 6 bit byte, 6 *is not a power of two*, but yes, a 6 bit byte can represent a power of two number of values. – 8bittree Aug 11 '16 at 15:35
  • 2
    @8bittree I think I got it, thanks for explaining! (retracted duplicate vote - though I think it would be easier for readers if an explanation like in your last comment would be [edit]ed into the question, this thing seems rather subtle) – gnat Aug 11 '16 at 15:51
  • 3
    Similar question on SO: http://stackoverflow.com/q/1606827/3723423 - the answer brings some plausible arguments about convenience – Christophe Aug 11 '16 at 19:08
  • 2
    @Snowman: The OP's post contains a "begging the question" fallacy: "Why are powers of two considered convenient byte sizes?" They aren't. It has nothing to do with powers of two; he misread the sentence in the Wikipedia article. – Robert Harvey Aug 11 '16 at 21:32
  • 4
    @RobertHarvey In the answer to "What is the history of why bytes are eight bits?" (also linked in my question) there is the following sentence: "Binary computers motivate designers to making sizes powers of two." Did I misread this too? What do both sources mean in your opinion? Just saying "you got it wrong" is not really doing it for me. – Andreas Aug 12 '16 at 22:47
  • @andreas: That statement is a bit misleading. It would be more accurate to say that "Computers are designed using powers of two because on-off (two-state) switches are best suited for digital computing." – Robert Harvey Aug 13 '16 at 02:32
  • @RobertHarvey i think that's a misconception. It's not because binary is more suited to digital but because binary appeared the most cost effective in the early days. There's a very clear and documented statement on that in Norbert Wiener's pioneer book "Cybernetics, or Control and Communication in the Animal and the Machine" in 1948 – Christophe Aug 13 '16 at 07:54
  • @Christophe: That's right. That's what I said. It costs less to manufacture two-state switches than it does 10-state switches. That makes them *more suitable.* – Robert Harvey Aug 13 '16 at 14:35
  • 2
    `As bytes are the smallest addressable unit, this does not make much sense.` -- Bytes are the smallest addressable unit on the memory bus, but you can still bring a byte into processor memory and work with it bit by bit. This is how microprocessors perform routine math operations. – Robert Harvey Aug 14 '16 at 03:00

6 Answers

10

I don't think 8-bit bytes have been successful because they have a width which is a power of two. If you don't want to address bits individually -- and that is a feature which is common neither now nor in the past -- having a power of two is of no real practical importance (it is just a reflex for hardware and software engineers, now far more than in the past, when sparing a few discrete components mattered, and staying on familiar ground is important for other purposes), and I don't remember it being mentioned in my history-of-computing readings(1). What one needed was lower case letters, and that meant something more than the then-dominant 6-bit character sets. ASCII was 7-bit, but ASCII was then thought of purely as an interchange code (and thus to be translated to an internal code for handling), and thus:

The Subcommittee recognizes that computer manufacturers are unlikely to design computers that use 7-bit codes internally. They are more likely to use 4-bit, 6-bit, and 8-bit codes. There is no widespread need at the present for interchange of more than 128 separate and distinct characters between computers, and between computers and associated input/output equipment. [Paper tape, which had a natural frame size of 8 bits but needed parity so that the payload of a frame was 7 bits, is also cited in favor of a 7-bit char for ASCII; a power of two is not cited among the advantages of 8 bits.] (2)

and on the hardware side the 8-bit byte won because it allowed packing 2 decimal digits into one byte, at a time when 75% of the data was numerical and represented in BCD(3).
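As a minimal illustration of that point (my own sketch, not taken from MacKenzie): two BCD digits fit exactly into the two 4-bit halves of an 8-bit byte, so the packed value even reads naturally in hexadecimal.

```c
#include <stdio.h>

/* Pack two decimal digits (0-9) into one 8-bit byte as BCD:
 * first digit in the high nibble, second digit in the low nibble. */
unsigned char pack_bcd(unsigned d1, unsigned d2)
{
    return (unsigned char)((d1 << 4) | d2);
}

int main(void)
{
    unsigned char b = pack_bcd(4, 2);   /* the number 42 */
    printf("0x%02X -> %u%u\n", b, (unsigned)(b >> 4), (unsigned)(b & 0x0F));   /* prints 0x42 -> 42 */
    return 0;
}
```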

(1) For instance, Blaauw and Brooks, Computer Architecture, and MacKenzie, Coded Character Sets, History and Development, both have a discussion of that subject.

(2) Document of X3.2 -- the Subcommittee responsible for ASCII -- cited by MacKenzie.

(3) MacKenzie, again.

AProgrammer
  • 2
    Thank you. Your answer is spot on and you brought references. You have my vote. I realize, though, that if what you say is true it is also impossible to prove: you can't prove the non-existence of something. I guess I should really interrogate the ones claiming "convenience" and check their sources. Maybe it is just a widespread rumor. – Andreas Aug 11 '16 at 20:04
  • 1
    The other convenience factor is that a byte can be represented easily as two hexadecimal values. Putting two binary coded decimals (BCD) in one byte is more commonly referred to as packed decimal. This was indeed considered convenient because the decimals can be read as decimal when data is displayed in hex. – JimmyJames Aug 12 '16 at 18:03
  • 12 bit bytes can be represented easily as three hexadecimal values. And you can store three BCD numbers in a 12 bit byte. Surely that's a lot better than two hexadecimal values and two BCD numbers. Actually, a 10 bit byte can contain three decimal digits. And I think that's how the IEEE decimal floating point standard works. – gnasher729 Aug 12 '16 at 23:18
  • 2
    @JimmyJames, I think you get the causality reversed with hexadecimal. Hexadecimal became popular because it was a compact way to represent 8-bit byte, previously octal was far more popular (and it was more popular on a machine like the PDP-11 which had 8-bit bytes but where 3-bit fields was significant in the instruction set encoding). – AProgrammer Aug 13 '16 at 05:25
  • @gnasher729, the 8-bit byte is a child of the 60's. Going from 6-bit char to 12-bit char was unthinkable in the 60's. Even today when we are far less constrained UTF-8 is popular because UTF-16 is deemed too wasteful. A 10-bit byte was about as unthinkable and the 10 bit per 3 decimal digits encoding is also totally unpractical when you are examining values in registers and in memory with a front panel without speaking about the impact on implementation with the technology of the time. – AProgrammer Aug 13 '16 at 05:34
2

Other than historical accident, there is no particular reason why we should use 8 / 16 / 32 / 64 bit. I suppose that 12 / 24 / 48 / 96 bit would really be more useful.

For handling text, Unicode using a hypothetical UTF-24 would be cheaper than UTF-32; a hypothetical UTF-12 would store all single- and double-byte UTF-8 characters in 12 bits, and all triple- and quad-byte UTF-8 characters in 24 bits (the range would be slightly reduced to 2^20 characters, but that's still four times more than is generously used); code would be simpler because there are only two variants.
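To make the "only two variants" point concrete, here is a hedged sketch of how such a hypothetical UTF-12 could be decoded; the exact prefix bits (0 for a single 12-bit unit, 10/11 for a lead/trail pair) are my own assumption for illustration, not something defined anywhere.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical UTF-12 decoder sketch. Units are 12-bit values held in uint16_t.
 * Assumed layout (one possibility, not a real standard):
 *   0xxxxxxxxxxx               -> one unit, 11 payload bits (the UTF-8 1- and 2-byte range)
 *   10xxxxxxxxxx 11yyyyyyyyyy  -> two units, 20 payload bits (roughly the 3- and 4-byte range)
 * Validation of the trail unit is omitted for brevity. */
uint32_t decode_utf12(const uint16_t *units, size_t *consumed)
{
    uint16_t u = units[0] & 0x0FFF;        /* keep only 12 bits */
    if ((u & 0x800) == 0) {                /* top bit 0: single-unit form */
        *consumed = 1;
        return u;
    }
    uint16_t t = units[1] & 0x0FFF;        /* lead unit (10...) followed by trail unit (11...) */
    *consumed = 2;
    return ((uint32_t)(u & 0x3FF) << 10) | (t & 0x3FF);
}
```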

For floating point, 48 bit is usually enough. 96 bit is substantially better than 80 bit extended. 24 bit is useful for graphics; much more useful than the 16 bit supported by some graphics cards. 48 bit pointers can handle 256 terabyte.

About the only disadvantage is bit arrays, where a division by 12 is needed to calculate byte positions. If that is felt to be important, I'm sure division by 12 can be implemented quite efficiently in hardware.
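To illustrate the bit-array point (a sketch of my own): with 8-bit bytes, splitting a bit index into a byte index and a bit offset is a shift and a mask; with 12-bit bytes it is a genuine division and modulo, which is what would have to be made fast in hardware.

```c
#include <stddef.h>

/* 8-bit bytes: the split is a shift and a mask. */
void locate_bit_8(size_t bit, size_t *byte_index, unsigned *bit_offset)
{
    *byte_index = bit >> 3;              /* divide by 8 */
    *bit_offset = (unsigned)(bit & 7);   /* modulo 8    */
}

/* 12-bit bytes: the split needs a real division and modulo. */
void locate_bit_12(size_t bit, size_t *byte_index, unsigned *bit_offset)
{
    *byte_index = bit / 12;
    *bit_offset = (unsigned)(bit % 12);
}
```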

gnasher729
  • Interesting point about UTF, although being slightly off-topic. Floating point byte (or bit) size is an endless battle between memory and precision where you just have to live with one or the other. Good point about bit arrays too. – Andreas Aug 11 '16 at 20:22
  • 1
    Interesting thoughts, but I am not sure this answers the question. –  Aug 11 '16 at 21:01
  • 1
    The question was: "Why is eight bit considered convenient". Surely saying "it's not" answers the question. – gnasher729 Aug 11 '16 at 21:23
  • 2
    @gnasher729 The question was: "Why is **power of two** bits per byte considered convenient", although your answer seems to apply just as well. – 8bittree Aug 11 '16 at 21:26
  • 2
    Comments are not for extended discussion; this conversation has been [moved to chat](http://chat.stackexchange.com/rooms/43934/discussion-on-answer-by-gnasher729-is-power-of-two-bits-per-word-convenient-i). – yannis Aug 14 '16 at 11:24
2

According to the Wikipedia article on word, this makes calculations related to addressing memory significantly easier:

Different amounts of memory are used to store data values with different degrees of precision. The commonly used sizes are usually a power of two multiple of the unit of address resolution (byte or word). Converting the index of an item in an array into the address of the item then requires only a shift operation rather than a multiplication. In some cases this relationship can also avoid the use of division operations. As a result, most modern computer designs have word sizes (and other operand sizes) that are a power of two times the size of a byte.
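As a concrete illustration of the quoted point (my own sketch, not from the article): with a power-of-two element size the index-to-address calculation is a shift, while with any other size it is a multiplication.

```c
#include <stdint.h>

/* Address of element i in an array of 8-byte elements starting at base:
 * a power-of-two element size lets the scaling be a left shift. */
uintptr_t addr_pow2(uintptr_t base, uintptr_t i)
{
    return base + (i << 3);   /* i * 8 */
}

/* With a 12-byte element size the scaling is a genuine multiplication. */
uintptr_t addr_non_pow2(uintptr_t base, uintptr_t i)
{
    return base + i * 12;
}
```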

vartec
  • 3
    Yes, power of two times the size of a byte. There's no inherent reason why a byte should be eight bits and not nine, twelve or fifteen. – gnasher729 Aug 12 '16 at 08:34
  • 1
    @gnasher729, much easier to divide by 8 (or 16 or 32 or 64) than it is to divide by 9 or 12 or 15. – robert bristow-johnson Aug 13 '16 at 01:50
  • @gnasher729 if word is power-of-2 bits, and power-of-2 bytes, this implies that byte has to be power-of-2 bits – vartec Aug 13 '16 at 02:17
  • 1
  • @vartec The article and quote say "The commonly used sizes are usually a power of two multiple of the unit of address resolution (byte or word)" and "most modern computer designs have word sizes (and other operand sizes) that are a power of two times the size of a byte." I read that as "word size" being measured in bytes, not bits. There is no rule in the article that word size in bits is or should be a power of 2. – Andreas Aug 13 '16 at 08:38
  • @vartec: IF. Obviously nobody would build a machine with 32 bit words and 12 bit bytes. But nothing speaks against a machine with 48 or 96 bit words and 12 bit bytes. And there have been machines where a word was ten bytes. – gnasher729 Aug 13 '16 at 14:30
  • @robertbristow-johnson: Much easier? Any processor with a 64 bit multiplier can divide by 9 or 12 or 15 just as easily as by 8 or 16. And dividing by the bit size is _rare_. – gnasher729 Aug 13 '16 at 14:54
  • 1
    @gnasher729: division by power of two is just a bit shift. So no, not “just as easily” – vartec Aug 13 '16 at 16:22
  • @vartec: It's a shift instruction vs. a multiply instruction. It's just as easy. Once an instruction is available, even if harder to implement, it's just as easy to use as any other instruction. – gnasher729 Aug 14 '16 at 18:48
1

This is convenient due to common hardware architectures using multiples of 8, e.g. 32-bit and 64-bit architectures. This means greater efficiency when using 8-bit data storage and transmission.
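For example (a minimal sketch of my own, assuming 8-bit bytes in a 32-bit word): because 8 divides 32 evenly, every byte of a word can be picked out with a shift, and the shift distance 8*n is itself just a shift of n.

```c
#include <stdint.h>

/* Extract byte n (0..3) of a 32-bit word. Since 8 divides 32 exactly,
 * four bytes fit with no leftover bits, and the shift distance 8*n
 * can be computed as n << 3. */
uint8_t get_byte(uint32_t word, unsigned n)
{
    return (uint8_t)(word >> (n << 3));
}
```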

"However, considerations of economy in design strongly push for one size, or a very few sizes related by multiples or fractions (submultiples) to a primary size. That preferred size becomes the word size of the architecture."

Word (computer architecture)

See also: What is the history of why bytes are eight bits?

Bradley Thomas
  • 6
    I will not accept this as an answer. My question is why power-of-two is convenient, not why the de facto standard is 8-bit. And the history behind 8-bit mentions 5, 6 and 7 bits being used for real reasons, while going from 7 to 8 is motivated with a "meh, why not". I got the feeling reading different sources that power-of-two had more to it than compatibility with current systems. (In reality the 8-bit gave 7-bit character sets parity.) Word is a different thing where I do get the benefit of power-of-two sizes, i.e. shift can be used instead of mult in calculations. – Andreas Aug 10 '16 at 19:51
  • @Andreas, the eighth bit gave "parity" to ASCII-based telecommunication systems. But I think a better answer to why eight bits for computer data paths comes from IBM systems, and the mapping between Hollerith card codes to bit-patterns in memory. (i.e., the character encoding that shall not be named.) – Solomon Slow Aug 10 '16 at 21:26
  • 1
    It's convenient because 1,2,4 and 8 all divide into 8. In the case of 7, that's a prime number and so only 1 and 7 divide into 7. – Bradley Thomas Aug 11 '16 at 13:18
  • 1
    @BradThomas "1,2,4 and 8 all divide into 8"? Could you explain the convenience with this property and tell me why 12 would be inconvenient? Thanks in advance! – Andreas Aug 11 '16 at 17:14
  • 1
    @Andreas: Power of two is convenient because **on-off switches are easy to build in large quantities.** That's the *only* reason we use powers of two. If it had been just as easy to build switches that have 10 states instead of two, we would have built *decimal* computers, not binary ones. – Robert Harvey Aug 11 '16 at 17:17
  • 3
    @RobertHarvey This question isn't about the number of states per switch (i.e. binary vs trinary or more), it's about how many switches to group together. See my edit to the question. – 8bittree Aug 11 '16 at 17:24
  • 1
    That is **not what you said.** You said "I will not accept this as an answer. My question is why power-of-two is convenient, not why defacto standard is 8-bit." I am telling you why power of two is convenient. It has nothing whatsoever to do with how many bits are used to represent something, except insofar as you need a certain number of bits to get a particular resolution. – Robert Harvey Aug 11 '16 at 17:26
  • 2
    As to your edit, there's *no meaningful distinction* between number of bits per byte and number of values per byte. It's two ways of expressing the same thing. The number of values that a byte can hold follows directly from the number of bits it contains: a byte has 8 bits, and so it can hold values up to `2⁸-1`. – Robert Harvey Aug 11 '16 at 17:34
  • 2
    Logically, it follows that you pick a size for byte that can hold a numerical range that is convenient. ASCII is 7 bits because that provides for 128 different values, enough to encode both cases of the Roman alphabet, numeric characters, punctuation, control characters and several special characters. A byte can hold 7 ASCII bits and one parity bit for error checking, for a total of 8 bits, suitable for a teletype. We've been using that size for a byte ever since. – Robert Harvey Aug 11 '16 at 17:42
  • If that explanation doesn't satisfy your definition of "convenient," then you need to go focus on something else for awhile. – Robert Harvey Aug 11 '16 at 17:43
  • @Andreas 12 would not necessarily be wholly inconvenient in itself, but the main reason it is inconvenient in computing today, is because common hardware architectures (16 bit, 32 bit, 64 bit) are not integer multiples of 12. This relates to register size, instruction set and address space. They are integer multiples of 8. – Bradley Thomas Aug 11 '16 at 17:51
  • @RobertHarvey There is a distinction. Values per byte are always powers of 2 (in a binary system), thus it's convenient simply because it's possible and nothing else is. Bits per byte does not have to be a power of 2. The quote in the question claims that powers of 2 bits per byte are more convenient. So what common attributes make 4, 8, or 16 bits per byte more convenient than 6, 7, 9, or 12 bits per byte? You've given specific reasons for 8 bits per byte, but those same reasons apply to pretty much everything (e.g. 9 bits allows for parity bit for UTF-8), not just powers of 2 bits. – 8bittree Aug 11 '16 at 19:18
  • 1
    `So what common attributes make 4, 8, or 16 bits per byte more convenient than 6, 7, 9, or 12 bits per byte?` -- 8 bits is the only byte size that I know of. It is true that 9 bits allows for parity bit for UTF-8, but that's not what the designers of byte went with. They went with 8 bits for reasons that I've already described in detail. All other word sizes in common use today are multiples of 8 bits, ***because a byte is 8 bits.*** End of. – Robert Harvey Aug 11 '16 at 19:32
  • 1
    @RobertHarvey [PDP-1](https://en.wikipedia.org/wiki/PDP-1) used 6 bit bytes, [PDP-10](https://en.wikipedia.org/wiki/PDP-10) had bytes between 1-36 bits, [UNIVAC 1100/2200](https://en.wikipedia.org/wiki/UNIVAC_1100/2200_series) had byte sizes as integer fractions of a 36 bit word, [GE-600 series](https://en.wikipedia.org/wiki/GE-600_series) had 6 and 9 bit bytes. – 8bittree Aug 11 '16 at 19:53
  • @8bittree: All of which were discarded in favor of the 8 bit byte, for reasons which I have already described in detail. It's pointless to bring up byte sizes that have been irrelevant for the last five decades, when everything after that is built on top of the 8-bit byte. – Robert Harvey Aug 11 '16 at 20:00
  • @8bittree I think you're missing something right under your nose. You're asking why these 8-bit or power-of-two sizes are used, and Robert has shown you many of the historical reasons that contributed to it being accepted as a standard. The standards you've cited are obscure and uncommon (I've never even heard of such devices). 8-bits became popular *because a popular scheme, ASCII, used them* and other standard creators designed their standards around compatibility with something that was popular. – Jeremy Kato Aug 11 '16 at 20:06
  • 1
    @RobertHarvey All your arguments are for 8 bit bytes specifically, not power of 2 bit bytes in general. A correct answer to this question (that does not refute the premise) should provide reasoning for why 16 bit bytes and 4 bit bytes (both powers of 2) are more convenient than, say 12 bit bytes (not a power of 2). – 8bittree Aug 11 '16 at 20:10
  • 2
    @JeremyKato The devices I mentioned are older (60s-80s era, for the most part), which is probably why you aren't familiar with them. ASCII, is actually a 7 bit encoding (parity is not part of the standard). But for the main part of your comment, no, I'm not missing anything. I understand there are reasons why 8 bits specifically is convenient, what you and Robert Harvey are missing is that the question is asking about powers of 2 bits *in general*, not specifically 8 bits. – 8bittree Aug 11 '16 at 20:21
  • 1
    @8bittree (I'm confusing you for OP here and there - my bad for that too) It's convenient to work with and easy to build upon. If I have a 8-bit integer in memory, but I need my integers to be larger, shifting to a 9-bit or 10-bit means everything now either needs to align differently in memory or waste space. If I just double it from 8 to 16, then it's about as easy to manage compared to introducing a new word-size multiple. – Jeremy Kato Aug 11 '16 at 20:35
  • 1
    @RobertHarvey, I don't believe that the answer has anything to do with 7 data bits plus parity. I remember the dial-up days: Parity bits (if we used them at all) were generated by the transmitting UART, and stripped by the receiving UART. There wasn't any need to store the parity bits in memory. I say, "If we used them at all" because once the 8-bit computers hit the market, everyone was pretty much using 8-n-1 to send 8-bit data, and let higher-level protocols (XMODEM, Kermit, ...) deal with transmission errors. – Solomon Slow Aug 11 '16 at 20:36
  • @8bittree: The power of two thing is a red herring. The number of bits in a byte doesn't have anything to do with 8 being a power of two. – Robert Harvey Aug 11 '16 at 20:36
  • 1
    @8bittree Also there's the convenience of having the standard. It's easier to make a program made for a 32-bit computer backwards-compatible with a 64-bit than having the computer emulate a different type of word-size in memory. – Jeremy Kato Aug 11 '16 at 20:36
  • @8bittree, I guess you have never used a computer where 18-bits was a convenient size for small integer values because you could fit exactly two of them into a single machine word. Or, used one where 8-bits was awkward because it used only 2/3rds of a 12-bit word. – Solomon Slow Aug 11 '16 at 20:37
  • 1
    @jameslarge: Then come up with your own theory for why we settled on 8 bits, but that's the byte size that we settled on. Everything after that can be answered with "because that's the size that everyone else uses now, and using any other size now would be really weird and very inconvenient." – Robert Harvey Aug 11 '16 at 20:38
  • 2
    OK, I will: Four bits is the least number that can hold a decimal digit. That's why Intel chose 4-bit as the word size for the 4004 microprocessor---a BCD machine, meant for use in pocket calculators. The 8008 was a more powerful 4004, that could move two BCD digits with a single instruction. The byte as we know it was born. Then followed the 8080, and that was the spark that ignited the whole personal computer industry. The 8080 inspired the z80 and the 6502, and the 6800, and it was followed by the 8088 and the 8086. It's been 8-bits/byte ever since. – Solomon Slow Aug 11 '16 at 20:55
  • 1
    @jameslarge: Works for me. The salient points being: 1. It has nothing to do with powers of two, other than determining the usable range of the resulting "byte," and 2. It's been 8-bits/byte ever since. – Robert Harvey Aug 11 '16 at 20:58
  • 1
    @JeremyKato We're talking about the number of bits per byte here, not the number of bytes per word. Complaining about wasting space in 9 bit bytes compared to 8 bit bytes is a bit odd, considering [most 8 bit byte systems set the eighth bit to 0 for an ASCII character](https://books.google.com/books?id=bXLDwmIJNkUC&pg=PA13&hl=en#v=onepage&q&f=false), thus wasting space on an 8 bit byte system. But even so, you generally wouldn't switch from storing an 8 bit value to storing a 9 bit value in a single machine, you'd start out with a 9 bit value on a 9 bit byte machine. – 8bittree Aug 11 '16 at 20:59
  • 1
    @RobertHarvey The power of two thing *is the entire point of this question.* It's entirely possible that the answer is that the question's premise is incorrect; that is, having a power of two bits per bytes has no effect on convenience. In that case, write that as an answer (or upvote the existing answers that say that) rather than explaining 8 specifically. – 8bittree Aug 11 '16 at 21:13
  • @RobertHarvey I remember that. I think most of the reasoning you had was about 8 bits specifically, rather than addressing the power-of-two thing. – 8bittree Aug 11 '16 at 21:23
  • 2
    @8bittree I started off with "Powers of two are relevant in computing solely because on-off switches are easier to build than 10 digit dials." I could have stopped there and had a perfectly valid, complete and correct answer. The (invalid) premise of the question is that 8 bits was chosen because it was a power of two. Nobody here stipulates to that, not even you. – Robert Harvey Aug 11 '16 at 21:26
0

Word widths are not always a power of two. I have recently been doing some coding on a SHArC DSP that has a 32-bit word width for numbers but not for the opcodes (which are 48 bits wide).

Probably the reason why word widths are a power of two is because of some instructions that test (or set or clear or toggle) a single bit or shift (or rotate) left or right by a specified number of bits. There is a bit field in the opcode to specify the location of the single bit or the number of bits to shift. If the word width is a power of two, this bit field requires log2(word_width) bits to cover the whole word. That is, a word that is 32 bits wide needs a 5-bit field in the opcode for these operations. If the word were 33 bits wide, it would need 6 bits, otherwise it could not cover the whole word; but a 6-bit field would also suffice for a word 64 bits wide.
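A small sketch of that arithmetic (mine, not part of the answer): the field needs ceil(log2(word_width)) bits, so a 32-bit word uses a 5-bit field with every encoding meaningful, while a 33-bit word already needs the same 6-bit field that a 64-bit word does, leaving roughly half of its encodings unused.

```c
#include <stdio.h>

/* Bits needed in an opcode field to encode a bit position / shift count
 * in the range 0 .. width-1, i.e. ceil(log2(width)). */
unsigned field_bits(unsigned width)
{
    unsigned bits = 0;
    while ((1u << bits) < width)
        bits++;
    return bits;
}

int main(void)
{
    printf("32-bit word: %u-bit field\n", field_bits(32));  /* 5: all 32 encodings used */
    printf("33-bit word: %u-bit field\n", field_bits(33));  /* 6: about half go unused  */
    printf("64-bit word: %u-bit field\n", field_bits(64));  /* 6: all 64 encodings used */
    return 0;
}
```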

Bits in an opcode are extremely valuable, so designers don't usually want to waste them. It then makes sense to make the word a power of 2 wide.

The reason bytes are 8 bits wide is that it's the smallest power of two that can hold an ASCII character (which is 7 bits).

robert bristow-johnson
  • This is not my area of expertise but it sounds like a valid reason for power of two byte AND word sizes. I imagine you have to worry less about UB too. For a shift 33-bits would require 6-bit opcode, but only about half of the possible values (0-32) have useful meaning. Would you agree? – Andreas Aug 13 '16 at 07:35
  • the opcode needs to be wider than the bit field needed for the shift count. a byte is nothing other than a word that is 8 bits. the reason why computer hardware **tends** to use word sizes that are 8 or 16 or 32 or 64 bits (it's not always the case, the old DSP56000 had 24-bit words) is because of the reasons i gave above and the reason given by vartec: given a bitmap of packed words and you are given a row and column number of a particular pixel, one has to divide the column number by the word width to know which word to access to test or change the pixel. dividing by a power of 2 is easy. – robert bristow-johnson Aug 13 '16 at 07:44
  • What is a "bitmap of packed words"? Does HighColor suit that description? – Andreas Aug 13 '16 at 08:31
  • @robertbristow-johnson: Total lack of imagination. With 9 bit bytes, we would use 36 bit words, 130 million colors instead of 16 million colours in RGBA, RGB666 instead of RGB555 or the monstrosity RGB565 for low-quality color, and everything would be just fine. And ASCII would include 512 characters up to Latin Extended. – gnasher729 Aug 13 '16 at 14:44
  • @Andreas, no, i meant two "colors". totally white or totally black. – robert bristow-johnson Aug 13 '16 at 16:21
  • again, @gnasher, from what you write you apparently know little of the history or the hardware. i was around back then. i remember text codes before 7-bit ASCII. i remember UARTs and RS-232 with a start bit, 7 data bits, 1 parity bit, and one stop bit. and i remember monotone animated "sprites" on a C-64. – robert bristow-johnson Aug 13 '16 at 16:25
  • @Andreas: PowerPC has always used six bits for shifts, both in 32 and 64 bits, and in 32 bits all 64 values were meaningful. In a logical shift, count >= 32 produced zero obviously. Shift with carry, count = 32 produced zero and put the highest bit into the carry register. count ≥ 33 set result and carry to zero. And rotate instructions did indeed rotate as often as supposed. For 33 bit registers all shift counts from 0 to 63 would be meaningful. – gnasher729 Aug 14 '16 at 18:53
  • @gnasher729 Please don't insult RGB565. With just a little dithering it looks great ;-) – Andreas Aug 15 '16 at 18:41
  • *"For 33 bit registers all shift counts from 0 to 63 would be meaningful."* no, there are 30 values that are not meaningful. – robert bristow-johnson Aug 15 '16 at 21:22
-1

It is tightly related to address space. By adding one more bit to your address bus, you can address twice as many memory locations. So when you add that extra line, you might as well use it to its full extent.

This leads to a natural progression of 1, 2, 4, 8, 16, 32 et cetera.

On the production side it is also easy to repeat the same lithographic pattern, that is, to double it. If you start out with one latch and keep doubling the pattern, you will pass 8, not 6, 10 or 12.

Martin Maat
  • 1
    How is that related to the number of bits in a byte? Are you seriously claiming that a 32 bit logical AND is easier to implement than 36 or 28 bits? – gnasher729 Aug 12 '16 at 08:32
  • 1
    I did not make any such claim. My suggestion is that it stems from earlier designs that were progressively extended in width as transistors got cheaper and ICs allowed for smaller circuits. – Martin Maat Aug 12 '16 at 09:46
  • 1
    Interesting theory about production technical level. You might be on to something. Could you extend the paragraph or maybe provide a link explaining the basics? – Andreas Aug 12 '16 at 23:56
  • 1
    It's nonsense. For example, in a graphics card where all kinds of odd bit sizes are needed in various places, everything is done exactly with the required size, and not one bit more. If an h.264 decoder needs 19 bits precision for some operation, then the hardware implements 19 bits and not 20 or 24 or 32. And excuse me, you don't manipulate lithographical patterns. You define the hardware and then run it through some software that creates the layout. – gnasher729 Aug 13 '16 at 07:12
  • Intel's first microprocessor (that was widely accepted as such) was the 4004 and it had a 4-bit architecture. I think they were used for desk calculators. From that came the 8008 which was 8-bit. And another bunch of 8-bits until the first 16-bit incarnation. So I imagine the roots of the microprocessor having something to do with the 8-bit central architecture that is still used. Mass production issues and the need for backward compatibility have always been important, making doubling beneficial. I am making this up but I think it makes sense. https://en.m.wikipedia.org/wiki/Intel_4004 – Martin Maat Aug 13 '16 at 07:24
  • @gnasher GPU manufacturers have their own backyards to work in, backward compatibility is not an issue to them. I am pretty sure compatibility was a driving factor for doubling the width of data busses with each new generation, which makes you stick to the 8-bit base. Production technical it may not have been such an issue, I may have been overspeculating there. – Martin Maat Aug 13 '16 at 07:33
  • 1
    @MartinMaat: You are confusing marketing + standardisation with technological reasons. And technology is what we are discussing. – gnasher729 Aug 13 '16 at 14:37