
I'm currently reading about RAM modules. Bigger DRAM modules are laid out as a matrix. When retrieving data, you first supply the row address and then the column address. One of the benefits of the matrix layout is that the row and column addresses are multiplexed on the same pins, so you only need n/2 address pins instead of n. By using a matrix the pin count can be reduced quite easily, so why isn't this done with SRAM modules?
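To make concrete what I mean by multiplexing, here is a small C sketch (the 16-bit address width is just an example, not any particular chip): the full address is split into a row half and a column half that take turns on the same pins.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t address = 0xB3A7;          /* full 16-bit cell address       */
    uint8_t  row     = address >> 8;    /* driven first, latched by /RAS  */
    uint8_t  col     = address & 0xFF;  /* driven second, latched by /CAS */

    /* 2^16 cells are reachable through only 8 shared address pins. */
    printf("row 0x%02X, col 0x%02X share the same 8 pins\n", row, col);
    return 0;
}
```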

Henk

2 Answers


First, don't confuse the interface (multiplexing the address lines) with the internal arrangement (laying things out in a matrix).

All RAM chips use a matrix at the lowest (single-bit-cell) level; most use a less regular layout at the higher level (for instance, two separate banks).

Multiplexing the address lines reduces the number of pins, but it has a cost: supplying the two halves of the address, with the associated strobe signals, takes time. When this time can be overlapped with things that must be done inside the RAM chip anyway, that is fine, but when it would slow the chip down it is a big performance problem.
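To make the timing cost visible, here is a rough sketch of the two-phase sequence; the helper functions and printout are made up, it is not any real controller's interface.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for whatever the real hardware provides. */
static void drive_address_pins(uint8_t half) { printf("  drive 0x%02X on the shared pins\n", half); }
static void assert_ras(void)      { printf("  assert /RAS (latch row half)\n"); }
static void assert_cas(void)      { printf("  assert /CAS (latch column half)\n"); }
static void release_strobes(void) { printf("  release /CAS and /RAS\n"); }

static void dram_read(uint16_t address)
{
    drive_address_pins(address >> 8);    /* phase 1: row half, plus settling time */
    assert_ras();
    drive_address_pins(address & 0xFF);  /* phase 2: column half, more settling   */
    assert_cas();
    printf("  data now valid on the data pins\n");
    release_strobes();
}

int main(void)
{
    printf("one read at address 0x1234:\n");
    dram_read(0x1234);
    return 0;
}
```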

SRAM can be 'directly addressed': the N-bit address is decoded into 2^N select signals, each of which activates a word of bit cells. (AFAIK this is not how modern SRAM really works, but it was true some time ago.)
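A minimal sketch of what 'directly addressed' means, with a deliberately tiny 4-bit address so that all 2^N select lines fit on the screen:

```c
#include <stdio.h>

#define N 4                               /* address width in bits          */

int main(void)
{
    unsigned address = 0x9;               /* some N-bit address             */
    unsigned select[1 << N] = {0};        /* 2^N word-select lines          */

    select[address] = 1;                  /* decoder: exactly one line high */

    for (unsigned i = 0; i < (1u << N); i++)
        printf("select[%2u] = %u\n", i, select[i]);
    return 0;
}
```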

DRAM is addressed by row/column because that fits the refreshing that is required: when you address one row (or was it a column?), all cells in that row are read and written back. Next you can (but don't have to) select a column, and you get the value of the selected bit cell(s). So the row/column split maps nicely onto the refresh hardware and process that must be present anyway.
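A toy software model of that refresh process (the array size is made up); the point is only that stepping through the rows touches every cell without ever needing a column address:

```c
#include <stdio.h>

#define ROWS 8                             /* made-up array size */
#define COLS 8

static int cell[ROWS][COLS];

/* Addressing a row reads every cell in it and writes them all back. */
static void refresh_row(int row)
{
    for (int col = 0; col < COLS; col++)
        cell[row][col] = cell[row][col];   /* read, then write back */
}

int main(void)
{
    for (int row = 0; row < ROWS; row++)   /* one sweep keeps the whole array alive */
        refresh_row(row);
    printf("all %d rows refreshed, no column address needed\n", ROWS);
    return 0;
}
```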

Wouter van Ooijen
  • If I recall, on a DRAM chip, asserting /RAS causes the row to be read, and releasing it causes that row to be written back. One may if desired read or write one or more bits within the row before that happens, though on many chips there is a limit to how long /RAS may be asserted. – supercat Jul 10 '13 at 18:43

The architecture is different. Classic dynamic devices are one bit wide, so you see 8 or 16 chips in a line (or on a SIMM) to store a byte or word (plus one or two more if parity bits are needed). Static devices are a full byte wide, so you usually see only one or two of them on a board; it's only when the data bus is wide (16 or 32 bits) that you are forced to use more than one SRAM chip.
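As a back-of-the-envelope illustration (the widths are typical, not tied to any specific parts):

```c
#include <stdio.h>

int main(void)
{
    int bus_width      = 16;   /* data bus width in bits        */
    int dram_bits_wide = 1;    /* classic x1 DRAM device        */
    int sram_bits_wide = 8;    /* typical byte-wide SRAM device */

    printf("x1 DRAM chips for a %d-bit bus: %d\n", bus_width, bus_width / dram_bits_wide);
    printf("x8 SRAM chips for a %d-bit bus: %d\n", bus_width, bus_width / sram_bits_wide);
    return 0;
}
```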

The Apple ][ was a classic design. It had both types of devices, and played to their strengths. Dynamic memory was expensive, so you could choose how many rows you wanted to populate in the 3x8 array of 16Kbit devices. The DRAM refresh cycle was piggybacked onto the video display update: each time the video frame was read out, it also refreshed all the DRAM devices. Then there were six 2Kbyte ROM/EPROM sockets that stored the resident part of the bootstrap and OS at the top of the memory map. EPROMs and SRAMs have very similar pinouts. Embedded designs can take advantage of this to let you design/program with an SRAM, and ship an EPROM in the same socket for the finished product.

Ron