8

I noticed while scanning the datasheet for a 23K256 SRAM chip that it has 32768 bytes (= 262144 bits, or about 262 Kbit).

The manufacturer clearly identifies this chip as 256Kbit.

Reading through the datasheet, it clearly says "32768 x 8", which confirms my scan result - but it doesn't say what those extra 6 Kbit are for.

Can anyone shed some light on this:

  1. Why 262 Kbit, when it's rated 256 Kbit and the actual documented maximum address is 0x7FFF (32767)?
  2. Can I use this extra space? Is it safe?
  3. Can bits on the SRAM (or maybe bytes) get damaged over time?
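For reference, the numbers above can be sanity-checked with a few lines of arithmetic (Python, just restating the datasheet figures):

```python
# Figures from the 23K256 datasheet: "32768 x 8" organization,
# addresses 0x0000 through 0x7FFF.
total_bytes = 32768
total_bits = total_bytes * 8     # 262144 bits -> the "262 Kbit" in my scan
last_address = 0x7FFF            # 32767

assert total_bits == 262144
assert last_address + 1 == total_bytes  # the address range covers every byte
print(total_bits)                # 262144
```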
JRE
  • 67,678
  • 8
  • 104
  • 179
Shlomi Hassid
  • 199
  • 1
  • 6
  • 3
    Note that "kilo" is shortened to "k", not "K". – Dmitry Grigoryev May 12 '22 at 06:38
  • Perhaps damaged, sure - from static or other causes - and single-event upsets can flip bits; sometimes there may be parity or ECC. But I think folks have clearly covered the extra-bits question (base 10, not base 2), and you don't have extra bits... though if you did, they would probably be parity or ECC, not spares. – old_timer May 13 '22 at 01:39

4 Answers

19

256 Kbit is referring to the binary interpretation of the prefix, sometimes written Ki ("kibi", for kilo-binary), which is common for memory specifications.

So that means it is referring to 256 * 2^10 bits = 262144 bits = 32768 x 8.
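A quick sketch of that conversion, contrasting the binary reading of the prefix with a (mistaken) decimal reading:

```python
# Binary prefix: K = 2**10 = 1024, as used for memory capacities.
bits_binary = 256 * 2**10        # 262144 bits
print(bits_binary // 8)          # 32768 -> matches the "32768 x 8" organization

# Decimal kilo (k = 1000) would give a different, wrong figure here:
bits_decimal = 256 * 10**3       # 256000 bits, not what the datasheet means
```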

Tom Carpenter
  • 63,168
  • 3
  • 139
  • 196
  • I feel so dumb now :) I don't know why I was assuming kbit meant 1000 bits here. Thank you. – Shlomi Hassid May 11 '22 at 12:35
  • 3
    K (originally Ki) should mean 1024 and k should mean 1000 but they are often misused, e.g. a 10K resistor isn't 10,240 Ohms. – Finbarr May 11 '22 at 13:06
  • 4
    @Finbarr: Can you point me to any evidence to back up the "originally Ki" claim? I never saw that prefix until decades after I bought my first computer with "5K" of RAM. – supercat May 12 '22 at 08:00
  • @supercat It's part of an IEC standard. There's more about it in [this Wikipedia article](https://en.wikipedia.org/wiki/Binary_prefix#kibi) – Finbarr May 12 '22 at 08:18
  • 2
    @supercat "first proposed ... in 1995". I would consider "originally" to be what was used before that, which was definitely not "Ki". – mow May 12 '22 at 08:26
  • 3
    @Finbarr: Since I bought my first "5K" computer in 1980 and the Ki prefix was proposed in 1996, where do you get "originally Ki" from? – supercat May 12 '22 at 08:28
  • 1
    K was, as it says, a de-facto standard but never formally defined. I should have made that clearer in the comment. – Finbarr May 12 '22 at 09:20
  • 3
    k (lowercase) is an SI standard, as are M and G. But that's for SI units like meters and seconds. The SI standard for information is actually the Joule, which is inconveniently big. Bits are not an SI unit. Memory chips (which are the subject of this question) are standardized by JEDEC, not ISO or IEC, and they define the kilobit as 1024 bits. – MSalters May 12 '22 at 11:41
  • @MSalters "The SI standard for information is actually the Joule" -- What? Could you provide a source for this? It makes no sense. By what expression would you convert between bits and joules? They're completely independent units. Their only relation is that you need energy to transfer information, but they're different things. A CD may hold X amount of information, but that doesn't equate to Y amount of energy. – JoL May 13 '22 at 03:18
  • @JoL: The SI quantity that is equivalent to information is entropy, the measure of order or disorder in a system. The related constant is Boltzmann's constant, which is in the order of 10^-23. – MSalters May 13 '22 at 07:10
  • Just to clarify, the binary prefixes (kibi, mebi, etc.) are international standards now. See IEEE Std 1541 and IEC 60027-2. But they are not part of the **official SI standard**. – Elliot Alderson May 13 '22 at 09:43
  • @Finbarr well your 10K resistor _might_ well be 10,240 Ohms (or 9760 Ohms). Given 5% or likely even 10% tolerances.... :) Anyway, the SI way for making letter case be important (eg. 'milli-' vs. 'mega-') is just bad idea. – Matija Nalis May 13 '22 at 15:18
15

There are two kinds of memory: memory where the structure, addressing, and continuity clearly demand a number of cells that is a power of two (or at most a small multiple of one), and "bulk" memory where they don't. Flash memory sits in the middle: the raw cell counts clearly are powers of two, but wear management requires setting some of that aside, and at least some consumer SSDs routinely store non-binary values in a cell.

The primordial bulk memory is the hard disk. Long before the prefixes "Ki" and "Mi" were introduced for 1024 and 1048576 (\$2^{10}\$ and \$2^{20}\$, respectively), memory sizes in binary computers were measured in powers of two. This is still the case for RAM: nobody claims to have a computer with 17GB of RAM even when the exact count is 17179869184 bytes (16GiB). Flash memory capacities are similarly advertised in powers-of-two based units of raw capacity, since "32GB" (actually GiB) as the power-of-two number sounds better than a net size of "30GB" or so after wear-level management.

Hard disk manufacturers were the first to realize that capacities looked better in powers-of-ten based unit multipliers, leading to a long intermediate period in which some manufacturers boosted their advertised sizes by diverging from what was in common use (annoying if you try to allocate enough sectors to swap out 16GB of RAM).

Now of all current and historical perversions of units, probably the most insulting one is the "1.44MB" floppy disk, whose actual size is 1440KiB (its "MB" is a hybrid of 1000 × 1024 bytes).

To return to your original question: RAM is still consistently specified in terms of unit multipliers based on powers of two, even if the ostensibly more correct "Ki", "Mi", and "Gi" prefixes are by no means consistently employed in marketing and documentation in place of the historic "k", "M" and "G" prefixes.
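To illustrate the gap the answer describes, here is a small sketch (Python; the formatting helper is hypothetical, not from any standard) rendering the same byte counts with binary (IEC) and decimal (SI) multipliers:

```python
def fmt(n_bytes, base, units):
    """Render a byte count with successive unit multipliers of the given base."""
    value = float(n_bytes)
    for unit in units:
        if value < base:
            return f"{value:.2f} {unit}"
        value /= base
    return f"{value:.2f} {units[-1]}"

def binary(n):   # IEC: base 1024
    return fmt(n, 1024, ["B", "KiB", "MiB", "GiB", "TiB"])

def decimal(n):  # SI: base 1000
    return fmt(n, 1000, ["B", "kB", "MB", "GB", "TB"])

ram = 17179869184                 # the 2**34-byte figure mentioned above
print(binary(ram), "vs", decimal(ram))       # 16.00 GiB vs 17.18 GB

floppy = 1440 * 1024              # the "1.44MB" floppy: 1440 KiB
print(binary(floppy), "vs", decimal(floppy)) # 1.41 MiB vs 1.47 MB
```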

  • Thank you for the answer - It's good to learn a new thing :) @TomCarpenter answered it as well but I'm accepting yours so you have your first accepted and well deserved answer. – Shlomi Hassid May 11 '22 at 13:48
  • 12
    Stop spreading that lie that hard disk manufacturers started switching to powers-of-ten for marketing purposes. They have _always_ used the standard prefix. The ones confusing the customers are the operating system manufacturers who can't get their units straight. – pipe May 11 '22 at 14:33
  • 4
    The lawsuit that really made this an issue was against flash drive manufacturers. Look at the fine print on the package and you will see that they explicitly define 1GB = 1,000,000,000 bytes. – Elliot Alderson May 11 '22 at 15:33
  • 11
    @pipe Not "always". I remember 40MB Manchester drives that was actually 40MiB – slebetman May 11 '22 at 23:01
  • 3
    @pipe: The so-called "1.44MB" drive holds 2,880 sectors of 512 bytes each. As for flash devices, a "16MB" flash chip would hold 32,768 sectors of 528 bytes each, but flash devices require a certain amount of "slack space" to operate efficiently. While a drive with the right firmware could store 512 bytes of useful data for each sector, efficient and reliable operation requires that drives have a certain amount of "slack space", so a drive with smaller reported capacity would generally be more useful. – supercat May 12 '22 at 07:59
  • 4
    @pipe: The notion that kb=1024 dates back to literally the first commercial DRAM chip, the Intel 1103 which was one kilobit = 1024 bits back in 1970. I would be rather weird for Operating System vendors back then to use a different definition. – MSalters May 12 '22 at 11:35
  • 2
    @pipe power-of-2 units are older than HDDs, because binary computers were invented long before the advent of the first hard disk – phuclv May 12 '22 at 15:49
  • @supercat magnetic drives also need a "slack space" because they need to store error-correction data. That's why 4Kn and SMR drives were invented to improve density by reducing the ratio of ECC data – phuclv May 12 '22 at 15:51
  • @phuclv: On a typical magnetic drive, if one were to write a sector and later writes it again, the new data would be written in the same location as the old, erasing the old data and writing new data in a single operation. On flash media, it's possible to write to previously-blank 528-byte pages independently, but once a page is written, it cannot be rewritten without first erasing all pages in the block containing the sector. To accommodate this, a logical sector rewrite will be accomplished by identifying a blank page, storing the new sector contents there, and invalidating the old sector. – supercat May 12 '22 at 18:06
  • @phuclv: When a flash drive is idle, it can then identify blocks that contain a lot of invalidated pages, copy the still-valid portions of those blocks to new pages, and then erase the old blocks. The more slack space a drive has, the less time it will have to spend moving around pages of data to allow reclamation of storage blocks. – supercat May 12 '22 at 18:08
  • It changed over time, and over place, even back in the day. E.g. my Seagate ST-225 half-size 5.25" MFM disk which had 615 cylinders, 4 heads, 17sectors per track, and 512 byte sectors, had 21411840 bytes in total (actually little less, because it had manufacturer provided list of bad sectors listed on it upon purchase). It was widely advertised (and sold to me) as 21MB HDD, even if today one can find both 20MB and 21MB references for that same model. Also: while SI is popular, it does not hold a trademark on the letter "k" - and that USA at least was not (nor is) fond of SI anyway. – Matija Nalis May 12 '22 at 18:22
  • 2
    @MatijaNalis I never heard of the ST-225 as 21 MB - *always* 20 MB. See for example the reference in https://en.wikipedia.org/wiki/IBM_Personal_Computer_XT – manassehkatz-Moving 2 Codidact May 12 '22 at 18:52
  • @MatijaNalis: The quoted capacity would be 20,910KBytes. A "megabyte" of magnetic storage capacity was generally neither 1,000,000 bytes (1953.125 sectors of 512 bytes each), nor 1,048,576 bytes (2048 sectors), but rather 1,024,000 bytes (2,000 sectors). By that definition, an ST-225 with no bad tracks would be close enough to 21 "megs" that some marketers might call it "21 megs", to distinguish perfect units from those with bad tracks. – supercat May 12 '22 at 19:50
  • @slebetman I'd be very interested in some documentation about that 40MiB drive, because not only have I checked my old archive and browsed through numerous old magazines and I never could find a single example of a drive that advertised anything other than powers-of-ten. – pipe May 13 '22 at 03:04
  • @supercat Your so-called 1.44MB disk holds 2000000 bytes (and not even 2097152!), whatever your operating system does with those bytes are not really the fault of the disk manufacturer. – pipe May 13 '22 at 03:10
  • @pipe I remember older drives had fixed 512 byte sectors. Before LBA you didn't really have X amount of MB on a disk, rather you had number of sectors and number of cylinders (platters). It used to be that when you format a drive you had to specify how many sectors and how many cylinders the drive had. Since sectors are fixed powers of 2 it was impossible to buy a drive with powers of 10 storage. It was around the time of Windows98 that I first saw powers of 10 disk drive sizes – slebetman May 13 '22 at 03:17
  • @slebetman Sure, but the number of cylinders and sectors etc was rarely a power of two and then it stops being a useful way of presenting the size. – pipe May 13 '22 at 03:40
  • @pipe actually, so-called 1.44MB floppy disks (as well as their older 5.25" 1.2MB siblings), that was just what standard DOS "format" command formatted them for (in days when "formatting" was not a synonym for "erasing"). The access to hardware was much more low-level then, and there were 3rd party tools (like [800k](https://vetusware.com/download/800%20II%20Diskette%20BIOS%20Enhancer%201.40/?id=3342)) which allowed floppies to be formatted to bigger capacities (e.g. via using more sectors on outer tracks or more densely packed sectors) and faster operations (by skewing sector positions). – Matija Nalis May 13 '22 at 14:47
  • @manassehkatz-Moving2Codidact searching the web for `ST-225 21MB` should show you hundreds of thousands of matches (same order of magnitude as slightly more popular `ST-225 20MB` search). So yeah, both were (and are) present, indicating that confusion and/or marketing were as alive and kicking as today. – Matija Nalis May 13 '22 at 15:27
  • 1
    @pipe: A high-density floppy drive is specified to be capable of resolving 100,000 flux transitions per track, even on the innermost track. If one used a drive and controller that were designed to maximize storage capacity at the expense of complexity, and only needed to write a disk once, in order, from start to finish, a typical high-density floppy could probably be pushed beyond 3,000,000 bytes without much difficulty. – supercat May 13 '22 at 17:52
  • 1
    @pipe: If a disk will need to be safely writable by a drive whose rotational speed is 5% above nominal, it will be necessary that sectors written by a drive that's operating at precise nominal speed allocate more than 105% as much space for each sector as would be needed if the disk would only be written with drives running at nominal speed. On the flip side, if a disk will be single-pass mastered by a drive that's running at precisely 95% of nominal speed, it could be written about 10% more densely while still being readable with a standard drive and controller. – supercat May 13 '22 at 18:03
  • 1
    @pipe: If one uses a controller that can more finely control the timing of flux transitions, and which can put more flux transitions on outer tracks, capacity can be boosted further, and if one uses a drive with finer control of head movement and won't need to be able to write a track repeatedly without disturbing tracks on either side, one could pack more than 80 tracks/disk. – supercat May 13 '22 at 18:05
4

It's the difference between powers of 2 and powers of 10.

RAM chips are always addressed by an integer number of address lines, which makes it natural for them to have capacities that are a power of 2. Early on, someone noticed that \$2^{10}\$ (1024) was conveniently close to 1000, so they started using K to represent 1024 instead of the traditional 1000.

As capacities go up, the difference between binary powers and decimal powers gets larger. 1024 vs. 1000 is a 2.4% difference. 1099511627776 (\$2^{40}\$) vs. 1000000000000 (\$10^{12}\$) is a 10% difference. Since RAMs are still constrained to powers of 2 by their addressing, they are still specified using binary powers. Hard disk makers switched to decimal powers early on, but confusingly the operating systems (such as Windows) report capacities in binary powers, leading to much confusion.
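The growing divergence is easy to tabulate (a Python sketch):

```python
# Excess of each binary power over its decimal counterpart, in percent.
for i, prefix in enumerate(["K", "M", "G", "T"], start=1):
    binary_val = 2 ** (10 * i)
    decimal_val = 10 ** (3 * i)
    excess = (binary_val / decimal_val - 1) * 100
    print(f"{prefix}: {binary_val} vs {decimal_val} -> {excess:.1f}% larger")
# K: 2.4%, M: 4.9%, G: 7.4%, T: 10.0%
```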

Mark Ransom
  • 359
  • 1
  • 7
3

Tom already addressed the first part of your question. Regarding the wear and tear you refer to in point 3, this is not a failure mode that I am aware of for SRAM (or DRAM, for that matter).

You are probably thinking of flash memory, where each write cycle wears the floating-gate structure ever so slightly - but in SRAM the memory cell is basically a flip-flop and is not subject to such damage.

Of course it is still silicon, so I suppose that over long periods of time the chip can get damaged, through electromigration for example, but not for reasons specific to its being an SRAM chip.

Vladimir Cravero
  • 16,007
  • 2
  • 38
  • 71
  • Interesting... So is it a good assumption to say that with SRAM we can assume all the address space is "safe" to use? – Shlomi Hassid May 11 '22 at 12:39
  • @ShlomiHassid SRAM does not have a failure mechanism from repeated use. No type of RAM does, as it would defeat the purpose of RAM as something a computer can use as working memory, writing to it every few clock cycles if it wants. If it had the failure mode of EEPROM/flash, normal computer operation would wear out the memory in seconds. – Hearth May 11 '22 at 12:54
  • Thank you for the clarification. – Shlomi Hassid May 11 '22 at 13:05
  • 2
    And to protect against other failure modes of D- or SRAM, you can add extra bits for ECC. Usually one extra bit for every eight. Memory modules would still be labeled with their net capacity though, e.g. 1Gbitx72 and the non-ECC 1GBitx64 are both labeled 8GByte even though they technically have 9. – mow May 12 '22 at 08:37
  • @Hearth: FRAM has a limited lifespan, although it's much higher than flash. – MSalters May 12 '22 at 12:01
  • @MSalters Okay, fair, but it's unlikely you'd be using FRAM for very much--it's too expensive. I was thinking of SRAM and DRAM. – Hearth May 12 '22 at 13:50
  • @Hearth: DRAM does have a failure mechanism for multiple reads of a row that occur between refresh operations on adjacent rows. This was the mechanism behind "row hammer" attacks, which would allow non-privileged code to perform sequences of memory accesses that would have a non-trivial likelihood of causing arbitrary memory corruption, which might occasionally allow for privilege escalation. – supercat May 12 '22 at 18:25
  • 1
    @supercat: But that's really a subcase of the known failure mode of not refreshing often enough, combined with a deficiency in the memory controller that it doesn't map an operation to the correct full set of rows that are degraded and need to refresh. – Ben Voigt May 12 '22 at 19:09
  • @BenVoigt: By my understanding, the DRAM devices would malfunction even when operated in full accordance with the documented specifications. Later specifications started including requirements for adjacent-row refresh, but I don't think such requirements were routinely specified until after rowhammer-based attacks were discovered. – supercat May 12 '22 at 19:45
  • DRAM and SRAM Typically are fabricated with internal redundancy, eg spare rows and columns, and bad sets of bits are mapped out and good spare sets of bits are mapped. Typically this is done during manufacturing test, and the mapping is fused. – Krazy Glew May 13 '22 at 15:55