I'm interfacing with SDRAM on an FPGA, and full-page bursts are a godsend for streaming data. They seem much, much handier than a fixed burst size. I know this mode was removed when we moved to DDR. Does anyone know why the most useful burst mode was removed?
- Wild guess: it's useful for you, but not for other people – user253751 May 30 '22 at 13:23
- @user253751 Perhaps. But a full-page burst can emulate any other burst size, so it seems like a general feature was removed in favor of specialized ones. That would seem the opposite of what you'd do in a newer-generation technology, unless there was a good reason for it. – John Smith May 30 '22 at 13:41
- DDR SDRAM is usually constructed with multiple internal banks (typically four of them). These banks operate fairly independently, and it is in fact possible to overlap accesses to the four banks in sequence such that you can stream data continuously. – Dave Tweed May 30 '22 at 13:43
- Yes, there was a good reason: burst aborts make the protocol a lot more complicated, while repeated commands every 8 cycles are easy to implement and more flexible. – Simon Richter May 30 '22 at 13:45
- The DDR burst sizes are the same as the cache-line size of CPUs (32 or 64 bytes); this is not a coincidence, and is presumably what CPU memory controllers want, since they track requests by cache line. – Peter Cordes May 31 '22 at 03:45
1 Answer
There is something better now: independent banks.
You can activate a row in bank 1 while accessing bank 0, then issue a write command to bank 1 exactly 8 cycles after the write command to bank 0; the DQS toggling from the first burst serves as the preamble for DQS on the second. The same applies to reads.
If you issue commands at exactly the right times (which can easily be hardcoded with a counter), you can sustain a continuous stream across the entire chip.
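The overlapped schedule described above can be sketched as a small timing model: the ACTIVATE for the next bank is issued while the previous bank's burst is still on the data bus, and the counter-driven spacing keeps the stream gapless. A minimal Python sketch — the bank count, the tRCD value, and the 8-cycle command spacing are illustrative assumptions, not datasheet values:

```python
NUM_BANKS = 4    # typical DDR internal bank count (assumption for illustration)
CMD_SPACING = 8  # clocks between successive WRITE commands (per the answer above)
T_RCD = 3        # ACTIVATE-to-WRITE delay in clocks; illustrative, not from a datasheet

def stream_schedule(num_bursts):
    """Return sorted (clock, command, bank) events for a continuous write stream.

    Each bank's ACTIVATE lands while the previous bank's burst is still
    in flight, so the data bus never goes idle between bursts.
    """
    events = []
    for i in range(num_bursts):
        bank = i % NUM_BANKS
        write_clk = T_RCD + i * CMD_SPACING
        # Open the row early enough to satisfy tRCD; this overlaps with
        # the burst currently streaming out of the previous bank.
        events.append((write_clk - T_RCD, "ACTIVATE", bank))
        events.append((write_clk, "WRITE", bank))
    return sorted(events)

for clk, cmd, bank in stream_schedule(4):
    print(f"clk {clk:3d}: {cmd:8s} bank {bank}")
```

Printing the schedule shows each ACTIVATE issued 8 clocks after the previous one, with the WRITE to each bank following tRCD later — exactly the fixed pattern a simple counter can generate.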

Simon Richter