
What gives some SD/MMC cards a much higher speed class rating than other SD/MMC cards?

Why are some solid-state "disks" (SSDs) much faster than other SSDs?

I'm hoping that I can take at least some of the ideas behind storing data faster and apply them to a data logger design I'm working on that stores (often bursty) data to a few flash chips.

(Ideas that involve re-designing flash memory cells and fabricating new flash chips to work faster are interesting, but are not as useful to me since, alas, I don't own a fab).

davidcary
    related: [MicroSD with ECC](http://electronics.stackexchange.com/questions/35403/microsd-with-ecc). – davidcary Jul 10 '12 at 19:40
  • I think it's a combination of the memory technology itself and how many bits are written in parallel. The second is something you can do on your own to some extent. In other words, more devices operating in parallel increases average write speed per byte. – Olin Lathrop Jul 10 '12 at 19:46

2 Answers


I'm pretty sure Olin has it nailed in his comment there: you can either increase speed by accessing more bits in parallel, or decrease the pin count and access bits serially, and these trade-offs happen internally. The way you get a Class 10 device is that the internal controller takes in the write command and accesses enough flash cells in parallel that erasing and re-writing them can sustain 10 MB/s. The problem is that this generally makes stacking flash cells more expensive, because you need more lines between each layer, which is why micro-SD cards get so much more expensive in higher classes.
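To make the parallelism idea concrete for a logger driving its own flash chips: below is a minimal sketch, in C, of striping one record across two chips so that both program operations overlap. The chip count, page size, and the `flash_start_program()` / `flash_wait_ready()` calls are all assumptions standing in for whatever your actual parts and driver layer provide.

```c
/* Minimal sketch of striping a write across two flash chips to gain
 * parallelism.  flash_start_program() and flash_wait_ready() are
 * placeholders for whatever your chips' real command set provides:
 * one call kicks off a page program, the other polls the status
 * register until the chip is ready again. */

#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE 512u

/* Hypothetical per-chip driver hooks -- replace with your real ones. */
void flash_start_program(int chip, uint32_t page, const uint8_t *data, size_t len);
void flash_wait_ready(int chip);

/* Write one 1024-byte record as two 512-byte pages, one per chip.
 * Both programs run concurrently, so the total time is roughly one
 * page-program time instead of two. */
void write_record_striped(uint32_t page, const uint8_t buf[2 * PAGE_SIZE])
{
    flash_start_program(0, page, buf, PAGE_SIZE);              /* chip 0: first half  */
    flash_start_program(1, page, buf + PAGE_SIZE, PAGE_SIZE);  /* chip 1: second half */

    flash_wait_ready(0);   /* poll both chips until their programs finish */
    flash_wait_ready(1);
}
```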

The other way you can increase speed is by pre-erasing cells. The catch is that a program operation can only change bits in one direction (from the erased 1 state to 0), and changing them back requires erasing an entire block. So in general, when you ask an SD card to write 512 bytes, it erases the block you're writing to and then programs the new data. That erase slows the transaction down; if the card instead marked the old block for erasing later and wrote the new data to a different block that had already been pre-erased, the write would complete much faster. The controller IC can then go through and erase the marked blocks while it's idle.
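Here's a rough sketch of how that bookkeeping might look in data-logger firmware, assuming (for simplicity) one page per block; the array sizes and the `flash_erase_block()` / `flash_program_page()` calls are hypothetical placeholders, not any particular chip's API.

```c
/* Pre-erase sketch: keep a pool of already-erased blocks, satisfy
 * writes from that pool immediately, and queue dirty blocks to be
 * erased later when the logger is idle. */

#include <stdint.h>
#include <stdbool.h>

#define NUM_BLOCKS 64u

static uint16_t erased_pool[NUM_BLOCKS];  /* blocks known to be erased   */
static uint16_t erase_queue[NUM_BLOCKS];  /* blocks waiting to be erased */
static unsigned pool_count, queue_count;

void flash_erase_block(uint16_t block);                        /* placeholder */
void flash_program_page(uint16_t block, const uint8_t *data);  /* placeholder */

/* Fast path: grab a pre-erased block, program it, and retire the old
 * block to the erase queue instead of erasing it inline. */
bool log_write(const uint8_t *data, uint16_t old_block, uint16_t *new_block)
{
    if (pool_count == 0)
        return false;                       /* nothing pre-erased; caller must wait */

    *new_block = erased_pool[--pool_count];
    flash_program_page(*new_block, data);
    erase_queue[queue_count++] = old_block; /* mark old copy for later erase */
    return true;
}

/* Idle-time housekeeping: erase one queued block per call. */
void log_idle(void)
{
    if (queue_count > 0) {
        uint16_t b = erase_queue[--queue_count];
        flash_erase_block(b);
        erased_pool[pool_count++] = b;      /* back into the ready pool */
    }
}
```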

Aaaaand I wrote this whole blob as if you were talking to SD cards, but you said you're writing to flash chips. Whoops! The advice about pre-erasing should still be valid if you have that level of control over the flash chips. Anyone may feel free to correct me if I'm wrong, and I hope that helps!


Edit: Looking at the tags, it looks like you might actually be asking about SD cards, in which case the only thing you could really do is external parallelization. Essentially you'd be implementing a RAID 0 where the first piece of data goes to the first SD card, the second to the second SD card, and so on. You could theoretically increase your throughput by a factor of N, where N is the number of cards, as long as the data arrives slowly enough that the first card has finished writing by the time you finish sending the write command to card N.

The downside is that you would need N functioning SD card interfaces, and it would be kind of a pain to get data on and off the array.
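A sketch of that striping loop is below. Since SD cards are written in 512-byte sectors rather than single bytes, this version stripes whole sectors round-robin; `sd_write_block()` is a stand-in for whatever driver each of your card interfaces uses, and it's assumed to return as soon as a card has accepted the data so the next card can be fed while the previous one is still programming.

```c
/* Sketch of the "external RAID 0" idea: stripe consecutive 512-byte
 * sectors round-robin over N independent SD card interfaces, so that
 * while card k is busy programming you are already clocking data out
 * to card k+1. */

#include <stdint.h>

#define NUM_CARDS   4u
#define SECTOR_SIZE 512u

/* Hypothetical non-blocking per-card driver call. */
void sd_write_block(unsigned card, uint32_t lba, const uint8_t *sector);

/* Write 'count' consecutive logical sectors, striped across the cards.
 * Logical sector n lands on card (n % NUM_CARDS) at LBA (n / NUM_CARDS). */
void raid0_write(uint32_t first_sector, const uint8_t *data, uint32_t count)
{
    for (uint32_t i = 0; i < count; i++) {
        uint32_t n = first_sector + i;
        sd_write_block(n % NUM_CARDS, n / NUM_CARDS, data + i * SECTOR_SIZE);
    }
}
```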

Kit Scuzz
  • I'm actually asking about directly controlling flash chips. I'd be happy to learn some *other* way of storing to flash and reading from flash really fast, even if no SD cards ever use that method. +1 for a couple of speed-up ideas. – davidcary Jul 11 '12 at 12:14
  • Yeah, I think the most important bits are going to be SSD-style methods where you remap cells and pre-erase (like supercat said for remapping). Then just like I was saying for SD cards, the more you write in parallel the faster you'll be. The only other thing is if the flash chip allows you to write a whole block simultaneously you'll want to try and do that too (i.e. if it buffers and then waits for a commit command do that). Good luck! – Kit Scuzz Jul 11 '12 at 20:18

In addition to the fact that some flash devices are capable of writing more bits in parallel, another factor affecting speed is the way in which garbage collection is performed. One of the biggest sources of slowdown on flash drives stems from the fact that most flash devices do not allow 512-byte pages to be erased and rewritten individually; instead, erase operations must cover much larger areas (e.g. 32 KB or more). If a device is asked to rewrite block 23, it will find an empty page, write "I am block 23" along with the new data, then find the old copy of block 23 and mark it as invalid. If the number of empty pages gets too low, the device will check whether there is any erasable block that holds no valid pages. If not, it will find one with very few valid pages and move each of those pages to a blank page in some other block, invalidating the old copies as it goes. Once a block has no valid pages left, it can be erased and all of its pages returned to the pool of blank ones.
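As a toy illustration of that write path (not any particular drive's firmware), here is roughly how the "I am block 23" remapping might look, assuming a RAM-resident logical-to-physical map and hypothetical `flash_program_page()` / `find_blank_page()` helpers.

```c
/* Toy sketch of the remapping write path: each physical page carries
 * the logical block number it currently holds, new data is appended to
 * an empty page, and the previous copy is merely marked invalid (real
 * erasure happens later, a whole erase block at a time). */

#include <stdint.h>

#define NUM_PAGES     1024u
#define INVALID_PAGE  0xFFFFu

/* Assumed to be initialised to INVALID_PAGE at mount time. */
static uint16_t logical_to_physical[NUM_PAGES]; /* logical -> physical page */
static uint8_t  page_state[NUM_PAGES];          /* 0 = blank, 1 = valid, 2 = invalid */

void     flash_program_page(uint16_t phys, uint16_t logical, const uint8_t *data); /* placeholder */
uint16_t find_blank_page(void);                                                    /* placeholder */

void rewrite_logical_block(uint16_t logical, const uint8_t *data)
{
    uint16_t new_page = find_blank_page();

    /* "I am block <logical>" is stored with the data (typically in the
     * page's spare area) so the map can be rebuilt after power loss. */
    flash_program_page(new_page, logical, data);
    page_state[new_page] = 1;

    uint16_t old_page = logical_to_physical[logical];
    if (old_page != INVALID_PAGE)
        page_state[old_page] = 2;            /* old copy is now garbage */

    logical_to_physical[logical] = new_page;
}
```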

Many schemes can be used to keep track of how pages are mapped and to decide which blocks should be recycled when. It's possible to design fairly simple schemes that can be implemented on a small micro with limited RAM, but performance may not be great (e.g. it may have to repeatedly read through the flash to identify blocks for garbage collection, and it may place data blocks without regard for whether they're likely to become "obsolete" soon). Conversely, if the controller has a generous amount of RAM available, it may be able to do a better job of identifying which blocks should be garbage-collected when, and may also be able to store blocks of data alongside other blocks that will have similar useful lifetimes.
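For example, the victim-selection part of that garbage collection could look roughly like the sketch below, which picks the erase block with the fewest valid pages; `valid_pages_in_block()` and `relocate_valid_pages()` are assumed helpers whose cost depends heavily on whether the map lives in RAM or has to be rebuilt by scanning the flash.

```c
/* Garbage-collection sketch: pick the erase block with the fewest
 * still-valid pages, copy those pages out to blank pages elsewhere,
 * then erase the block. */

#include <stdint.h>

#define NUM_ERASE_BLOCKS  256u
#define PAGES_PER_BLOCK   64u

unsigned valid_pages_in_block(uint16_t block);   /* placeholder bookkeeping query   */
void     relocate_valid_pages(uint16_t block);   /* copy + remap survivors, placeholder */
void     flash_erase_block(uint16_t block);      /* placeholder erase command       */

/* Reclaim one block: prefer a block with no valid pages; otherwise take
 * the one with the fewest, since that minimises the copying overhead. */
void garbage_collect_one(void)
{
    uint16_t victim = 0;
    unsigned best = PAGES_PER_BLOCK + 1;

    for (uint16_t b = 0; b < NUM_ERASE_BLOCKS; b++) {
        unsigned v = valid_pages_in_block(b);
        if (v < best) {
            best = v;
            victim = b;
            if (v == 0)
                break;                  /* nothing to copy: best possible victim */
        }
    }

    if (best > 0)
        relocate_valid_pages(victim);   /* move survivors, invalidating old copies */

    flash_erase_block(victim);          /* block is now all garbage; wipe it */
}
```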

Incidentally, I consider it unfortunate that solid state drives have not standardized on some sort of file system at the controller level (meaning that rather than asking for block #1951331825, software would ask for blocks 4-8 of file #1934129). A flash drive which knew how information was stored in files could make much better decisions about which data should be placed together than one which simply has to deal with seemingly-independent writes to various sectors, and could also do a more effective job of ensuring data integrity under adverse conditions.

supercat