
I have used Verilog to develop RTL representations of synthesizable digital circuits, and have recently been using Verilator to run simulations of these. My understanding of Verilog semantics, therefore, is based on how things work under those simulations (Verilator attempts to match a synthesized representation).

However, this question on StackOverflow (see also this linked document) has drawn my attention to the idea that Verilog can legitimately be run in two different modes, these being pre-synthesis and post-synthesis, and that the meaning of the language (and the results of simulation) are not the same between the two modes. Of particular interest to Q&A sites like this one, discussions about Verilog's behavior might become quite confused by lack of an explicitly specified distinction between the two modes.

I am aware that Verilog can be used to describe algorithms in ways that cannot be readily synthesized by automated tools, but assuming we have a design that is targeted for synthesis, what would be the purpose of running a pre-synthesis simulation?

Note: What actually started me wondering about this was this response to another question that I posted. I was interested in post-synthesis semantics, but the answer seems to be addressing pre-synthesis semantics, although the distinction is not explicitly indicated in the question or the answer.


I understand that the physics and timing of real hardware are not addressed in pre-synthesis simulations, but this is not what I am asking about.

An example of language semantics differences (as indicated in the linked document):

  • pre-synthesis: the order of blocking assignments can matter
  • post-synthesis: the order of blocking assignments does not matter
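To make the first bullet concrete, here is a minimal Verilog sketch (three alternative versions of the same clocked process; the signal names are hypothetical, not from the linked document):

```verilog
// Version 1: blocking assignments, 'a' first.
always @(posedge clk) begin
    a = in;   // 'a' updates immediately
    b = a;    // 'b' sees the NEW value of 'a' -- both registers load 'in'
end

// Version 2: same statements, swapped. In event-driven (pre-synthesis)
// simulation this is a different circuit:
always @(posedge clk) begin
    b = a;    // 'b' sees the OLD value of 'a' -- a two-stage pipeline
    a = in;
end

// With nonblocking assignments the order no longer matters: all
// right-hand sides are sampled first, then all updates occur together.
always @(posedge clk) begin
    a <= in;
    b <= a;
end
```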

The point being: pre-synthesis simulation may fail to match hardware because of the semantic rules alone, even when there are no timing problems. So running a pre-synthesis simulation may give a wrong result (one that doesn't match hardware) unless you have carefully constructed the code to work the same under both modes of the language. The question is basically: why run a simulation that is known to have the wrong semantics?

Further note: Verilator is completely idealized, and has no notion of any specific hardware, so perhaps it is not accurate to call that post-synthesis. However, it uses the semantic rules of post-synthesis, and is purportedly very fast, so to me this establishes a baseline of what first-order simulation should probably do.

Brent Bradburn
  • Sorry to be a shill (albeit completely unaffiliated), but I think no one who has answered this question actually understood it (although the answers are still useful to me). Perhaps you should give Verilator a look -- it has done wonders for my organization. And in case you missed it in the docs, **Verilator is typically faster running with post-synthesis semantics than commercial simulators running in pre-synthesis mode**. – Brent Bradburn Apr 02 '16 at 00:45

4 Answers


The main purpose of pre-synthesis simulation is to verify the logical functionality of your design, without worrying about the specific timing details of a particular implementation.

This saves a lot of time in the functional debugging cycle, since you don't need to wait for the synthesis process to complete before simulating again after making a change.

Once you have the design working correctly with "nominal" or "unit" delays, you can then run the synthesis tools and verify that it continues to work with actual timing delays. Fixing the issues at this stage and dealing with long critical paths usually takes fewer iterations, which is good since each iteration takes longer.

But in fact, many designs are never taken through post-synthesis simulation. It is often sufficient to use pre-synthesis simulation to verify the functionality, and the static timing analysis tools to verify the post-synthesis timing.


Regarding your subsequent edit: if the order of blocking statements ever matters, you're probably doing something wrong.

One of the key differences between simulation and reality that I frequently trip over is the specification of sensitivity lists for processes. The simulator will always take the sensitivity list literally, because this is important with regard to making the simulator run efficiently. However, the synthesis tools and the actual hardware don't really care about sensitivity lists (although some synthesis tools will give you a warning if there seems to be a problem) — if a signal changes, those changes will propagate through the logic regardless of what the sensitivity list might say.
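A minimal sketch of this mismatch (hypothetical signals; not from the answer itself):

```verilog
// Incomplete sensitivity list: 'b' is missing.
always @(a)      // pre-synthesis: the block re-evaluates only when 'a' changes
    y = a & b;   // synthesized hardware: 'y' updates when EITHER input changes

// Mismatch-free version (Verilog-2001 and later):
always @(*)      // equivalently: always @(a or b)
    y = a & b;
```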

Dave Tweed
  • "I frequently trip over" -- that is exactly the point of my question: Why run a simulation that trips you up? I guess the answer is that the only other option takes much more time. It seems like there should be a happy medium... – Brent Bradburn Jan 11 '15 at 02:51
  • There is no "happy medium" - if you try to debug functional behavior on post-synthesis Verilog code (a so-called Verilog netlist), you're dead in the water. It is similar to trying to debug an electronic circuit's functional failure using Maxwell's equations - it is perfectly fine in theory, but can rarely be done in practice (except for the simplest circuits). Dave's answer is correct - Verilog code is about functionality, and the fastest debug cycle is achieved using pre-synth simulation. This answers your question. If you're interested in post-synth simulation - ask another question. – Vasiliy Jan 11 '15 at 09:26

The meaning of the language is the same; the difference is really whether the design needs to be synthesizable.

Pre-synthesis simulation assumes ideal hardware, with constant propagation delays, so it will not accurately reflect a component being split into two parts with a long interconnect between them. On the other hand, that simulation is fairly cheap, and there is no need for a full compilation beforehand.

This can be used for functional tests of individual units that should be synthesizable across many different devices (i.e. most reusable components).

Post-synthesis simulation uses the hardware model for the given temperature, core voltage, speed grade etc., and in order to give a meaningful result, it needs to be aware of the placement. This is a great debugging aid, especially when writing timing constraints.

Simon Richter

I've done a few Verilog designs for FPGAs using Xilinx ISE 14.7, with many hierarchical nested layers of modules, and I find debugging is much more difficult post-synthesis.

Pre-synthesis simulation (also called behavioral simulation) retains the hierarchical net names, so it's relatively easy to drill down to a specific lower level module to inspect its behavior.

However post-synthesis, the entire design becomes one big flat mass of registers and combinational logic. Assuming that the interesting nets have not been optimized away, they may have been inverted or otherwise renamed.

Pre-synthesis (behavioral) simulation gives an idealized representation that closely follows the structure described by the code. If the design doesn't work correctly at this stage, it won't work post-synthesis either.

Post-synthesis simulation gives the best representation of what the hardware will actually do, but it's relatively more time and effort to get useful results.

In many cases (for FPGA based designs) it's cheaper and simpler to go directly from successful synthesis to testing on FPGA hardware. If the design is verified on the FPGA then you're done. The post-synthesis simulation only becomes necessary as a diagnostic if there are problems that can't be understood from observing the external signals.

MarkU

I'm not sure what you mean by running Verilog in different modes. I'm assuming you mean simulating the RTL and the synthesised code and that you are talking about functional simulation (that is, timing other than cycle-level is ignored).

The purpose of running RTL level simulations is that it is faster to simulate, easier to debug and easier to modify. Just as working with a high-level programming language compares to working with assembler.

In addition, you can use non-synthesisable code to get something working for test or evaluation purposes even when the whole design is not ready or libraries not available.

Ideally, of course, both RTL and gate level would give the same results.

You can use lint tools to uncover many mismatch scenarios. Many modern simulators will also warn you (albeit you may be drowned in warnings).

The RTL/gate mismatches combined with slow gate-level simulation gave rise to equivalence checker tools.

There are many coding style issues (see Cliff's article), but many can be avoided by lint checkers.

Using 'x's can be useful in RTL, for example to uncover reset issues. However, care must be taken to ensure this sort of thing is not part of the actual design; any design code that relies on or masks 'x's is definitely dangerous.
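As a sketch of the reset-checking idea (hypothetical register; four-state simulators initialize regs to 'x', whereas hardware and two-state simulators like Verilator do not):

```verilog
// In RTL simulation, 'count' starts as 8'hxx. If 'reset' is never
// asserted, the 'x' propagates into downstream logic and is easy to
// spot in waveforms -- a missed reset that hardware would silently hide.
reg [7:0] count;
always @(posedge clk) begin
    if (reset)
        count <= 8'd0;
    else
        count <= count + 1;   // x + 1 stays x until a real reset occurs
end
```

The dangerous counterpart is logic that tests for 'x' (e.g. `if (sel === 1'bx)`): it can only ever be true in simulation, never in hardware.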

Using directives (full_case, parallel_case, translate on/off) deserves extra scrutiny.
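A sketch of why such directives deserve scrutiny (hypothetical signals), using the classic full_case mismatch:

```verilog
// The simulator ignores the pragma; the synthesis tool obeys it.
always @(*) begin
    case (sel)               // synopsys full_case
        2'b00: y = a;
        2'b01: y = b;
        2'b10: y = c;
        // No default branch. Simulation holds the old value of 'y'
        // when sel == 2'b11 (an inferred latch), but "full_case" tells
        // synthesis to treat 2'b11 as a don't-care, so the gates may
        // drive anything -- a classic RTL/gate-level mismatch.
    endcase
end
```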

As a vaguely related amusing aside (which wasn't so amusing at the time), many years ago I worked on a pre-release commercial equivalence checker that a customer was using to reduce reliance on gate level simulation. Around their first tape-out I discovered a tool bug that tied some 'x's to zero by mistake and, to my chagrin, discovered that correct behaviour of the block relied on the value being zero, even though synthesis was free to pick either (since the designs were supposedly equivalent). I put in a frantic call to our AE who noticed the floating wire at placement and tied it to ground, 'just because it was nearby'! Close call.

copper.hat