
I have never seen a flash chip whose capacity is not a strict power of 2 (unlike hard drives). I wonder what prevents manufacturers from creating such chips: is it some engineering reason, just marketing, or something else?

I came to this thought after examining some flash drives: the NAND flash inside is, for example, 8 GiB, while the drive itself has a size of 8 GB. It looks like the difference is used to compensate for errors, which appear quite often in MLC NAND, by replacing bad blocks with spare ones.

For example, plugging an SD card labeled as "512 MB" into my Linux box produces the following message:

sd 12:0:0:0: [sdb] 996352 512-byte logical blocks: (510 MB/486 MiB)

It is neither formatting nor partitioning overhead: the full size of the device itself is well below 512 MiB.
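A quick check of the numbers in that kernel message (a minimal sketch; the block count is taken from the dmesg line above):

```python
# Figures reported by the kernel for the "512 MB" SD card above
logical_blocks = 996352   # 512-byte logical blocks
block_size = 512          # bytes per logical block

total_bytes = logical_blocks * block_size
print(total_bytes)                            # 510132224
print(total_bytes // 10**6)                   # 510 -> "510 MB" (decimal)
print(total_bytes // 2**20)                   # 486 -> "486 MiB" (binary)
print((512 * 2**20 - total_bytes) / 2**20)    # 25.5 MiB short of a true 512 MiB
```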

Also, I'm aware of the out-of-band (OOB) space present in all (MLC) NAND flashes. I don't consider it "extra" capacity, because:

  1. It is almost always used to store the ECC codes rather than actual data; hence, the amount of information is the same with or without OOB.
  2. The count of pages and blocks in the entire flash is still a power of 2, and moreover, the number of bytes in the data area and in the OOB area of a page is, on its own, a power of 2 too. While the latter may well be justified by minimizing addressing overhead, the former is puzzling to me. We already have RAS and CAS, with one's address space bigger than the other's, and the matrix is already asymmetrical, so why make each dimension exactly a power of 2? (A worked example of a typical geometry follows this list.)
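As a concrete illustration of that power-of-2 layout, here is a quick sketch using the geometry of the Samsung K9G8G08U0A discussed in the comments below (the numbers are taken from there; the code itself is just illustrative arithmetic):

```python
# Geometry of a typical MLC NAND chip (Samsung K9G8G08U0A, as discussed
# in the comments below); note that every count is a power of 2.
data_bytes_per_page = 2048    # main data area of one page
oob_bytes_per_page  = 64      # spare/OOB area of the same page
pages_per_block     = 128
blocks_per_chip     = 4096

main = data_bytes_per_page * pages_per_block * blocks_per_chip
oob  = oob_bytes_per_page  * pages_per_block * blocks_per_chip
print(main // 2**30, "GiB of data")   # 1 GiB of user-addressable data
print(oob  // 2**20, "MiB of OOB")    # 32 MiB of supplementary OOB space
```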

So, what prevents a vendor from adding a few more rows or columns? I'm mostly interested in NAND flashes, as they have a unified 8-bit address/data bus, so the number of bits in the address does not really matter (unlike for NOR ones).

Catherine

4 Answers


The amount of silicon required to address fewer than \$2^n\$ but more than \$2^{n-1}\$ pages/rows/columns/cells/etc. is the same as the amount of silicon required to address exactly \$2^n\$. So the silicon usage efficiency is best at \$2^n\$.
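A small sketch of that point (illustrative only; the real address decoder is combinational logic, but the number of address lines it needs is the same):

```python
import math

def address_lines(units: int) -> int:
    """Address lines needed to select one of `units` rows/pages/bytes."""
    return math.ceil(math.log2(units))

# Anything between 129 and 256 rows needs the same 8-line decoder,
# so only the full 256 rows use that addressing silicon completely.
for rows in (129, 200, 255, 256):
    print(rows, address_lines(rows))   # all print 8
```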

Further, if you intend to put several of them together in a parallel access scheme, you will end up with gaps if each chip doesn't address \$2^n\$.

There's little advantage to supporting sizes between \$2^{n-1}\$ and \$2^n\$.

If you need to create a device with memory in the range between \$2^{n-1}\$ and \$2^n\$, then you will generally find that buying the \$2^n\$ part is more cost-effective than buying the \$2^{n-1}\$ part plus a smaller part. Manufacturers have the same issue when producing silicon dies. Yes, they could make one, but it wouldn't increase their bottom line.

Adam Davis
  • Actually, the flash chips used in silicon drives generally have a size which is a power-of-two multiple of 528. While silicon efficiency might be better with a "straight" power of two, drives often need to store blocks which combine 512 bytes of data with a small quantity of bookkeeping information. If memory chips used logical pages of 512 bytes each rather than 528, drives would have to store a page worth of bookkeeping information beside each page of data--a massive waste. – supercat Oct 02 '14 at 19:03
  • @supercat Yes, some newer chips contain extra space. Many of them, however, do this because they simply have too many bits due to their 3 bit per cell MLC flash. Some other topologies, such as 3D stacking, also result in odd factors. In these cases the best efficiency comes down again to addressing according to the flash topology. Further, in the rush to increase capacity some reliability is exchanged, but fixed with error detection. So while a unit may have 192 three bit cells, it only exposes 176 three bit cells, and corrects a few bit errors per block. – Adam Davis Oct 02 '14 at 20:15
  • The convention of flash chips having 528-byte blocks goes back into the 1990s; it's not just "some newer chips". – supercat Oct 02 '14 at 20:27
  • @supercat 528 byte blocks are designed for 4 bit ECC. Yes, ECC has been around for a long, long time, and thus flash manufacturers do support it. They aren't uncommon, but they are really only used when robust error correction and detection is required. They are typically more expensive than the 512 byte block size parts, though, which again points to cost efficiency of silicon being the reason most flash favors power of two. However with newer 3 bit per cell MLC flash the cost difference isn't as great, and the need for error correction is greater. But yes, not all flash is power of two. – Adam Davis Oct 02 '14 at 20:36
  • Although manufacturers recommended using *part* of the space for ECC, the primary purpose of the space was to facilitate block remapping. When the PC writes a logical sector, the page holding data for that sector will not be immediately erased. Instead, the SmartMedia driver will write the page data to a new page, along with its logical sector number and related information. – supercat Oct 02 '14 at 20:42
  • @supercat Typically block remapping and wear leveling techniques apply to both 512 and 528 block flash, so the bookkeeping occurs regardless of the size. Further, NAND Flash memory requires ECC to ensure data integrity. After applying 4 bit ECC, yes, you still have 516 bytes, which means you have a handy 4 byte value for block identification or remapping as needed. **Honestly I'm not sure what point you're trying to prove here** - if it was cost effective to have powers of ten flash memory, manufacturers would do it. They don't because it's more cost effective to use powers of two. – Adam Davis Oct 02 '14 at 20:50
  • The OP clearly says, *"Also, I'm aware of the out-of-band (OOB) space present in all (MLC) NAND flashes. I don't consider it as "extra" or something"* so please help me understand what is wrong with my answer, or make an edit yourself. As it is these comments appear to be going tangent to the question and answer. – Adam Davis Oct 02 '14 at 20:52
  • @AdamDavis Hi, just a curious passerby here, but could you elaborate on why "The amount of silicon required to address less than \$2^n\$ but more than \$2^{n-1}\$ pages/rows/columns/cells/etc is the same as the amount of silicon to address exactly \$2^n\$"? Is this for any theoretical positive integer n, or is it just true for real-world situations? – kumowoon1025 Dec 16 '17 at 01:47
  • @AdamDavis I tried to rationalize it by thinking you need 2ⁿ + ceil(log₂n), but then thought you only need that many bits for addressing, then got stuck thinking there's no way to know without specifics on the controller implementation... I know this is an old question but could you give me some pointers on the theory? – kumowoon1025 Dec 16 '17 at 01:53
  • @user3052786 If you want to address 256 units of memory, you need 8 bits, and enough digital logic to turn the 8 bits into the necessary 256 lines to activate each unit. (unit may be row, byte, etc). It's a binary decoder. If you only want 200 units, the amount of silicon is essentially the same. You might be able to skimp on output buffers if you require them, but that's implementation dependent and more closely associated with the memory cells than the addressing. It's not a lot of space, but it's still a matter of efficiency. – Adam Davis Dec 16 '17 at 03:32
  • @user3052786 You could go further and discuss the difficulty of stacking memory in the system as well. Some designs only need one memory chip, but most systems require multiple chips, and would require additional support to seamlessly manage a bunch of 200-unit memories vs memories that fill a complete binary address space. – Adam Davis Dec 16 '17 at 03:34

I've actually seen lots of devices whose capacity is not only not a power of two, but not even a multiple of a power-of-two block size.

Sometimes, as with the chips packaged in USB or SD devices, it's easy to assume that the controller you access the memory through is reserving space to map out bad blocks, etc.

But I've also seen SPI-interface flashes where the actual native block size was not a power of two but a bit larger; for example, the AT45DB161D has 4096 pages that you can configure as either 512 or 528 bytes long.
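For illustration, the arithmetic for that part in its two page-size modes (the 4096-page count comes from the text above):

```python
# AT45DB161D: 4096 pages, usable as 528-byte (native) or 512-byte pages
pages = 4096

def is_power_of_two(n: int) -> bool:
    return n > 0 and (n & (n - 1)) == 0

print(pages * 528, is_power_of_two(pages * 528))  # 2162688 False
print(pages * 512, is_power_of_two(pages * 512))  # 2097152 True (exactly 2 MiB)
```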

Chris Stratton
  • Hmm. Can you provide a few examples? I'm curious to look at the datasheet... – Catherine Aug 12 '11 at 21:45
  • I believe I did provide an example. – Chris Stratton Aug 12 '11 at 22:51
  • Oh, I'm very sorry, my comment should have been much better (3AM, you know...). The chip you've shown is pretty standard (compared to what I've seen): it has 4096 pages (which is indeed a power of two) of 512 bytes (which is too) of regular data and, optionally, 16 bytes (again, power of two) of out-of-band data for storing the ECC codes. Most, if not all, NAND chips provide such OOB space due to bit-flips; so, I don't see how this chip is non-standard. If it had, for example, 3072 pages, then it would be the one I'm searching for. – Catherine Aug 12 '11 at 23:07
  • I didn't say it's not standard, but it is not a power of two. If you look at how access to it actually works, a decision to designate those extra 16 bits for "out of band" usage would be your decision, not something forced on you by the architecture of the device. – Chris Stratton Aug 12 '11 at 23:14
  • I mean that the regular NAND flashes also have page size as not the power of two, and the extra bytes are always used to store ECC codes, so the actual data capacity is just the `(power of two) effective page size` × `(power of two) pages` × `(power of two) blocks`. A Samsung flash (K9G8G08U0A) I've worked with, for example, had pages of 2048 bytes + 64 bytes of OOB, blocks of 256 KiB consisting of 128 such pages, and 4096 of the blocks, totalling in 1GiB of actual space, and, accordingly, 32MiB of supplementary OOB space. That's how pretty much all NANDs are done. – Catherine Aug 12 '11 at 23:14
  • Specifically for this one, it looks like this is an SLC flash (given the size and 100k cycles promised by Atmel), and they are much more robust than MLC ones, so for some applications the OOB space may be redundant. But it still falls into the category I described in previous message. (Also, they're 16 extra bytes, not bits). – Catherine Aug 12 '11 at 23:18
  • It is not "OOB space" unless you, the user, decide that is how you want to use it. Don't imagine that there are 512 bytes in one area and 16 lonely ones off on the side - that's not the case. Instead, it's an example of a device with a non-power-of-two physical layout. – Chris Stratton Aug 12 '11 at 23:23
  • I don't. I understand that it is effectively a single continuous space, which I may use in this case (or just as well in the case of that MLC NAND flash) for anything I would want, splitting it as 520+8 or even dropping ECC altogether. But it is designed just like any other MLC flash and has everything at the higher level (sectors and blocks) laid out as a power-of-two collection, a property that "regular" flashes have, too. And the reason for my question was wondering whether there is (or why there isn't) a (single) chip allowing one to make a 48G flash drive, because 32G is too small, and 64G is too expensive. – Catherine Aug 12 '11 at 23:36
  • Then why not ask the 48G question and avoid the runaround? – Chris Stratton Aug 13 '11 at 00:16
  • That's only an example, really. I'm looking for an underlying reason, and I thought that asking a question about the fundamental concept would be more effective than asking about a particular case. I may be wrong, though. – Catherine Aug 13 '11 at 10:39

The word is deceit, and it also happens with hard disks. If you buy a 1 TB hard disk and it appears to hold 1 000 000 MB, technically you're not swindled, even if you actually meant and expected 1 048 576 MiB. Giving you more than the absolute minimum would cost them a few cents extra.

Data for WD Caviar Blue drives; other manufacturers use more or less the same numbers:
1 TB: 1 953 525 169 sectors (= \$71 \cdot 89 \cdot 173 \cdot 1787\$) = 0.91 TiB
640 GB: 1 250 263 728 sectors (= \$2^4 \cdot 3^3 \cdot 7 \cdot 31 \cdot 13337\$) = 596 GiB
500 GB: 976 773 168 sectors (= \$2^4 \cdot 3^4 \cdot 7 \cdot 67 \cdot 1607\$) = 466 GiB

The factorization of the sector counts shows that there's no power-of-two logic in them; the numbers are chosen to give a round figure in GB, not GiB.
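A quick sketch of that conversion (sector counts from the list above; 512-byte logical sectors assumed):

```python
# Sector counts quoted above; each logical sector is 512 bytes.
drives = {"1 TB": 1_953_525_169, "640 GB": 1_250_263_728, "500 GB": 976_773_168}

for label, sectors in drives.items():
    size = sectors * 512
    print(f"{label}: {size / 10**9:.1f} GB (decimal) = {size / 2**30:.0f} GiB (binary)")
# 1 TB: 1000.2 GB (decimal) = 932 GiB (binary)
# 640 GB: 640.1 GB (decimal) = 596 GiB (binary)
# 500 GB: 500.1 GB (decimal) = 466 GiB (binary)
```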


The advantage of having \$2^N\$ bytes is that you get the most possible storage for a given address width of \$N\$ bits. Most parallel memory provides \$2^N\$ bytes because of this; extra bits in the address width are expensive as you need larger packages.
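As a rough illustration of why address width pushes toward \$2^N\$, consider a hypothetical part with a "round decimal" capacity of 1 000 000 bytes (not a real device, just arithmetic):

```python
import math

capacity = 1_000_000                    # hypothetical "round decimal" part
pins = math.ceil(math.log2(capacity))   # address lines needed: 20
addressable = 2 ** pins                 # what 20 lines could reach: 1048576
print(pins, addressable)
print(f"{capacity / addressable:.1%} of the address space actually used")  # 95.4%
```

You pay for the full 20 address lines either way; only a \$2^{20}\$-byte part uses them completely.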

stevenvh
  • Sorry, but this is simply wrong. First, you can check any datasheet of a typical NAND flash device (I use K9G8G08U0A as an example): the sizes of blocks are clearly stated in binary units. So, the capacity of the flash chip itself is indeed \$2^n\$. But the capacity of the mass-storage device is less than that of the flash because of spare blocks, which will be used in place of bad blocks; bad blocks are present on any new MLC NAND device and appear throughout its life. – Catherine Aug 13 '11 at 10:33
  • Second, NAND flashes have a standardized interface where interaction with the external controller is done through an 8-bit bidirectional bus. Row and column addresses already exceed the bus width, and several transfer cycles are used to select a block; they do not fill all 16 bits either, so there is already some extra space. (This interface is not strictly serial, as there are 8 parallel lines of data, but it's not fully parallel either, as you cannot set up the whole address on just those 8 pins.) – Catherine Aug 13 '11 at 10:38
  • @whitequark - From the first flash card I could find (1 GB): 1 024 655 360 bytes total capacity = 1 000 640 kbytes = 977 MB. Coincidence that this is just above 1 000 000? I don't think so. – stevenvh Aug 13 '11 at 11:28
  • It may or may not be a coincidence; but in the question, I ask about flash chips, not flash cards. The capacity of flash cards is just a consequence. – Catherine Aug 13 '11 at 11:43
  • +1 not for your first section, but for the point on address width. – Kevin Vermeer Aug 13 '11 at 17:14
  • @Kevin, how does it matter for NAND flashes? They all have 8-bit (or 16-bit, in case of extended version) bidirectional unified I/O bus, independently of the capacity. – Catherine Aug 13 '11 at 20:27

There is no practical reason that couldn't be done.

The difference between 8 GiB and 8 GB can also be formatting overhead. The error-correction logic doesn't count in either number.

Brian Carlton
  • I've updated the question with an example. The error-correction logic in the form of ECC codes does not count, of course, as it uses the OOB area, but the error correction which replaces bad blocks with spare ones does count. – Catherine Aug 12 '11 at 19:33