
Nowadays, SystemC and SystemVerilog are used for the verification of complex designs, especially things like SoCs. I know that these languages bring OOP design techniques into the digital IC design domain.

What I don't know is exactly how they make things easier when it comes to verification. I want to see a side-by-side example, e.g. HDL vs. SystemC/SystemVerilog. Is there some resource I can use to understand this?

quantum231
  • heh ... you might be interested in taking a look at http://www.osvvm.org –  Mar 27 '14 at 11:32
  • I was looking for SystemC/SystemVerilog though – quantum231 Mar 28 '14 at 12:46
  • You were asking to compare the two approaches; that must mean you need to see it from the HDL perspective as well as the SystemC perspective. I'll let the SystemC experts provide the latter, but if you want to see a hatchet job on VHDL, look at Janick Bergeron's book "Writing Testbenches". –  Mar 28 '14 at 14:18

2 Answers


SystemVerilog introduced a range of new features intended to improve verification productivity; the most significant are probably:

  1. Object Oriented programming
  2. Constrained randomisation

Functional verification in simulation is entirely a software problem, so by including classes the verification community acquired all of the productivity gains of traditional OO programming (although roughly the OO programming of ten years ago: the lack of reflection and the limits of introspection mean that libraries written in SystemVerilog, such as UVM, depend heavily on the pre-processor rather than on the language itself).

By adding constrained randomisation, the language requires that compliant simulators provide a constraint solver. This allows the test space to be defined by a set of rules, with the simulator generating the test sequences. A classic analogy is Sudoku: by defining the problem in terms of constraints, you let the solver find the solutions rather than the verification engineer having to think up stimulus to exercise the DUT. Typically this makes it possible to hit corner cases that would have been missed with directed testing.
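
As a minimal sketch (the class and field names are hypothetical), a constrained-random transaction in SystemVerilog might look like this:

// Hypothetical packet class: each call to randomize() asks the
// simulator's constraint solver for values satisfying all constraints.
class packet;
  rand bit [7:0] length;
  rand byte      payload[];

  constraint c_length  { length inside {[4:64]}; }
  constraint c_payload { payload.size() == length; }
endclass

module tb;
  initial begin
    packet p = new();
    repeat (10) begin
      if (!p.randomize())
        $error("randomize() failed");
      $display("generated packet of length %0d", p.length);
    end
  end
endmodule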

Another improvement in SystemVerilog was the Direct Programming Interface (DPI), which makes it easier to interface with external languages such as C.
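
As a sketch (ref_model_add is a hypothetical C function, compiled and linked with the simulation), importing and calling C code looks like this:

module tb_dpi;
  // Hypothetical DPI-C import: the C side implements
  // int ref_model_add(int a, int b);
  import "DPI-C" function int ref_model_add(input int a, input int b);

  initial
    if (ref_model_add(2, 3) != 5)
      $error("C reference model disagrees with expectation");
endmodule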

SystemC is a slightly different beast: it can be used for modelling, interfacing to other languages, high-level synthesis, and so on.

You asked for a comparison of code; I use Python for verification so I don't have any good System[C|Verilog] examples, but Python is an OOP language, so this example might still be useful.

From a testbench I created for the OpenCores JPEG encoder:

from itertools import izip  # Python 2; on Python 3 use the built-in zip

from PIL import Image

import cocotb
from cocotb.clock import Clock
from cocotb.result import TestFailure


def compare(i1, i2):
    """
    Compare the similarity of two images.

    From http://rosettacode.org/wiki/Percentage_difference_between_images
    """
    assert i1.mode == i2.mode, "Different kinds of images."
    assert i1.size == i2.size, "Different sizes."

    pairs = izip(i1.getdata(), i2.getdata())
    dif = sum(abs(c1-c2) for p1,p2 in pairs for c1,c2 in zip(p1,p2))
    ncomponents = i1.size[0] * i1.size[1] * 3
    return (dif / 255.0 * 100) / ncomponents


@cocotb.coroutine
def process_image(dut, filename="", debug=False, threshold=0.22):
    """Run an image file through the jpeg encoder and compare the result"""

    cocotb.fork(Clock(dut.clk, 100).start())

    # ImageDriver and JpegMonitor are testbench components defined elsewhere:
    # the driver sends an image into the DUT, the monitor collects the output
    driver = ImageDriver(dut)
    monitor = JpegMonitor(dut)

    stimulus = Image.open(filename)
    yield driver.send(stimulus)
    output = yield monitor.wait_for_recv()

    if debug: output.save(filename + "_process.jpg")

    difference = compare(stimulus, output)

    dut.log.info("Compressed image differs to original by %f%%" % (difference))

    if difference > threshold:
        raise TestFailure("Resulting image file was too different (%f > %f)" %
                          (difference, threshold))

You can see that the use of a structured testbench and OOP makes this code very understandable and quick to create. There was a pure Verilog testbench for this block, and although it doesn't have the same level of functionality, the comparison is still interesting.

Chiggs

I have written directed testbenches for small designs in Verilog at university, and have since worked on SystemVerilog UVM environments for larger designs. Directed testing neither scales nor promotes reuse the way UVM does. OOP promotes reusability, and UVM builds on it with a standard, component-based methodology: components are guaranteed to interoperate and can be reused across larger designs, instead of writing piles of directed tests for each one.

In my experience, the major advantage SystemVerilog brings, in addition to constrained randomisation and OOP, is its set of features for modelling functional coverage. Functional coverage is a measure of how completely the design's functionality has been tested; for a large design, it provides a quick view of the status of that testing.
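
For a flavour of the reuse, here is a minimal sketch of a UVM driver (my_item and the way req is driven are hypothetical); the same skeleton is reused, essentially unchanged, from project to project:

`include "uvm_macros.svh"
import uvm_pkg::*;

// Hypothetical driver: pulls sequence items (of hypothetical class
// my_item) and drives them onto the DUT.
class my_driver extends uvm_driver #(my_item);
  `uvm_component_utils(my_driver)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req);
      // ... drive req onto the DUT interface here ...
      seq_item_port.item_done();
    end
  endtask
endclass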

Functional coverage is modelled using covergroups and coverpoints, which the verification engineer writes to reflect the verification plan. Verification planning is a manual process in which the engineer thinks through edge cases and other important test cases for the design. It is not possible to simulate all possible values of a large digital design, so the important cases and edge cases are targeted. In addition, constrained random generation produces test conditions that the verification engineer had not thought of.
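
As a minimal sketch (the module and signal names are hypothetical), a coverage model for a small ALU might look like this:

// Hypothetical ALU coverage model: the coverpoints record which opcodes
// and flag values were exercised; the cross records every combination.
module alu_coverage(input logic       clk,
                    input logic [2:0] opcode,
                    input logic       zero_flag);

  covergroup alu_cg @(posedge clk);
    cp_op     : coverpoint opcode;
    cp_zero   : coverpoint zero_flag;
    op_x_zero : cross cp_op, cp_zero;
  endgroup

  alu_cg cg = new();
endmodule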

An internet search revealed that Siemens EDA's Questa InFact solution provides these features:

  • Automatic generation of covergroups from the constraints in the testbench.
  • Stimulus modification to reach functional coverage goals more quickly than constrained random alone.
  • Augmentation of directed tests with functional coverage and additional test cases.

Sources (require sign-up with a work email address):

  1. https://verificationacademy.com/verification-horizons/june-2012-volume-8-issue-2/Automated-Generation-of-Functional-Coverage-Metrics-for-Input-Stimulus
  2. https://verificationacademy.com/verification-horizons/june-2012-volume-8-issue-2/Is-Intelligent-Testbench-Automation-For-You

Another advantage that SystemVerilog brings is SystemVerilog Assertions (SVA), a declarative language for specifying properties that are hard to express readably in plain HDL. With SVA, specifying multi-cycle properties becomes natural thanks to its sequence syntax.
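
As a minimal sketch (the module and signal names are hypothetical), a multi-cycle handshake property looks like this:

module handshake_checker(input logic clk, rst_n, req, gnt);
  // Hypothetical handshake check: every rising req must be followed
  // by gnt within 1 to 3 clock cycles, except while in reset.
  property p_req_gnt;
    @(posedge clk) disable iff (!rst_n)
      $rose(req) |-> ##[1:3] gnt;
  endproperty

  assert_req_gnt : assert property (p_req_gnt)
    else $error("req was not granted within 3 cycles");
endmodule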

SVA can also be used in the formal verification flow. Formal verification is a methodology that uses SAT/SMT engines to prove the correctness of properties of a digital design, without using simulation.

SystemC provides a library for writing loosely timed and approximately timed models, which run faster than RTL simulations and take less time to write. These models can be used for performance verification during the early architecture-exploration phase, and can later be reused as reference models in the UVM environment, eliminating the need to write them again in SystemVerilog.

SystemVerilog UVM also provides a very useful register library (the UVM register abstraction layer), which makes it easy to set and read registers both through the backdoor (operating directly on the register's storage, in zero simulation time) and through the frontdoor (through the design's bus, as it would be done in real life).
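
As a sketch (reg_model and ctrl_reg are hypothetical names for a uvm_reg_block and one of its registers, built for the DUT's register map), the two access paths look like this from inside a UVM sequence or test:

// Hypothetical fragment from a UVM test's run_phase.
uvm_status_e   status;
uvm_reg_data_t value;

// Frontdoor: generates real bus transactions through the DUT
reg_model.ctrl_reg.write(status, 32'h1, UVM_FRONTDOOR);

// Backdoor: reads the register storage directly, in zero time
reg_model.ctrl_reg.read(status, value, UVM_BACKDOOR);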

In short, these languages provide constructs that make it easier to write scalable and reusable verification environments.

Shashank V M
  • How is functional coverage defined? Isn't this a completely manual process and thus error prone? – quantum231 Nov 15 '21 at 10:50
  • @quantum231 Functional coverage is written according to the test plan. This has to be written carefully. An internet search revealed that there is one tool to automate the process of writing functional coverage; see my updated answer for details. – Shashank V M Nov 15 '21 at 13:51