In this post we discuss the differences and similarities between the architecture of formal and simulation testbenches. Many organizations use formal verification as a supplement to existing simulation efforts. Internal design assertions or interface assertions are often written ad hoc, and together form the list of checks being verified. Such efforts can detect corner-case bugs, but often at the cost of additional time and resources, and with no guarantee of overall savings in verification effort or improvements to the planned schedule. In contrast, some organizations have begun to architect formal testbenches early in the design phase, with the goal of complementing or even replacing selected simulation effort through end-to-end formal verification. Such an endeavor requires a formal testbench design that parallels a simulation testbench design.
A simplistic design of a simulation testbench is shown here. Bus Functional Models (BFMs) provide a set of methods and procedures to drive legal traffic at the inputs of the design under test (DUT). Directed tests or constraint-based sequences, written using these methods, exercise the inputs with different traffic patterns. Checkers, sometimes integrated directly into BFMs and sometimes written separately and instantiated stand-alone, check the correctness of the design behavior. For operational or algorithmic designs, the checkers may include a reference model in C/C++/SystemC; for data-transport or control-dominated designs, checkers may specify the functional requirements in Verilog or SystemVerilog. The DUT may also contain internal or interface assertions written in the SystemVerilog Assertions (SVA) language. Lastly, coverage models are used to determine the completeness of a testbench. Line and expression code-coverage metrics are popular, and are supported by all commercial simulators. In addition, functional coverage targets can be specified and implemented manually. Of course, coverage only measures the completeness of the input space; it does not determine the quality or the completeness of the checkers. This is a topic we will explore in a future post.
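As an illustration, an interface assertion and a manually specified functional coverage target for a simple valid/ready handshake might look like the following sketch. The signal names and the protocol rule here are hypothetical, not taken from any particular DUT:

```systemverilog
// Hypothetical rule: once 'valid' asserts, it must hold until 'ready'.
// Such a checker could be embedded in a BFM or bound to the DUT.
module handshake_checks (
  input logic clk, rst_n,
  input logic valid, ready
);
  // Interface assertion in SVA
  property p_valid_stable;
    @(posedge clk) disable iff (!rst_n)
      valid && !ready |=> valid;
  endproperty
  a_valid_stable: assert property (p_valid_stable);

  // A manually specified functional coverage target
  covergroup cg_handshake @(posedge clk);
    cp_backpressure: coverpoint {valid, ready} {
      bins stall  = {2'b10};  // valid asserted while ready is low
      bins accept = {2'b11};  // transfer completes
    }
  endgroup
  cg_handshake cg = new();
endmodule
```

In simulation this module would be instantiated (or bound) alongside the DUT, and the covergroup results would feed the functional coverage report.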
Using a Formal Testbench for End-to-End Verification
Figure: A formal testbench for end-to-end formal verification.
When formal is used for end-to-end verification, the formal testbench resembles a high-level simulation testbench. Instead of implementing the input behavior operationally using BFMs, a formal testbench uses a set of constraints to declaratively define the legal input behavior. Given enough runtime, the formal tool will verify the design against every input pattern that does not violate the constraints; in practice, runtime limits mean the proofs are completed only to bounded depths. A set of checkers describes what the formal tool will verify. For the formal testbench to be comparable to an equivalent simulation testbench, this set of checkers must be complete enough to stand in for the list of checkers used in simulation.
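As a sketch of this declarative style, a hypothetical request/grant input rule can be expressed as an `assume` property, which the formal tool treats as a constraint on the traces it explores; a companion `cover` property is a common sanity check that the constrained input space is still reachable. All signal names here are illustrative:

```systemverilog
// In a formal testbench, legal input behavior is defined declaratively.
// Hypothetical rule: DUT input 'req' must stay high until 'gnt' is seen.
module formal_env (
  input logic clk, rst_n,
  input logic req,   // DUT input, constrained by the environment
  input logic gnt    // DUT output, observed by the environment
);
  // Constraint: the formal tool only explores traces satisfying this
  asm_req_stable: assume property (
    @(posedge clk) disable iff (!rst_n) req && !gnt |=> req
  );

  // Reachability check: guards against over-constraining the inputs
  cov_req_gnt: cover property (
    @(posedge clk) disable iff (!rst_n) req ##[1:$] gnt
  );
endmodule
```

Any directed stimulus a BFM would have driven is subsumed: every request pattern consistent with the assumption is explored, up to the proof bound.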
One limitation of formal verification methodology is that the checkers have to be written in synthesizable Verilog or SystemVerilog, along with a layer of SVA, which may seem an onerous requirement to someone unaccustomed to writing synthesizable Verilog. More importantly, to manage formal verification complexity, checkers are often written in a different style than simulation checkers, sometimes checking outputs bitwise, and sometimes checking them symbolically. The coverage models used with formal tools can be exactly the same as those used with simulation, and coverage provides the same guidance: it determines the completeness of the constraints, and measures what fraction of the input space was exercised within the runtime available for the bounded proofs; and, just as in simulation, coverage metrics provide no guidance on the completeness of the checkers. The diagram also shows the use of optional Abstraction Models, which serve only to accelerate the formal proofs and avoid intractable runtimes.
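To give a flavor of symbolic checking, the following sketch tracks one arbitrary data value through a hypothetical single-stream, in-order data-transport design: the formal tool chooses the watched value (a free variable held stable), and the checker asserts that it emerges unmodified at its proper position. The names and the in-order assumption are ours, not from any particular design:

```systemverilog
// Symbolic end-to-end data-integrity checker (sketch).
// Rather than model every word, watch one symbolic value chosen
// by the formal tool and check it arrives intact and in order.
module e2e_symbolic_check #(parameter W = 32) (
  input logic         clk, rst_n,
  input logic         in_vld,
  input logic [W-1:0] in_data,
  input logic         out_vld,
  input logic [W-1:0] out_data
);
  // Free variable: the tool picks an arbitrary but fixed value to watch
  logic [W-1:0] sym_data;
  asm_sym_stable: assume property (@(posedge clk) $stable(sym_data));

  logic seen_in, seen_out;
  int   cnt_in, cnt_out;   // beats ahead of the watched one

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      seen_in <= 1'b0; seen_out <= 1'b0; cnt_in <= 0; cnt_out <= 0;
    end else begin
      // Mark the first input beat carrying the watched value
      if (in_vld && !seen_in) begin
        if (in_data == sym_data) seen_in <= 1'b1;
        else                     cnt_in  <= cnt_in + 1;
      end
      // Count output beats until the watched beat's turn arrives
      if (out_vld && !seen_out) begin
        if (seen_in && cnt_out == cnt_in) seen_out <= 1'b1;
        else                              cnt_out  <= cnt_out + 1;
      end
    end
  end

  // When the watched beat's turn arrives at the output, data must match
  a_data_integrity: assert property (
    @(posedge clk) disable iff (!rst_n)
      (out_vld && seen_in && !seen_out && cnt_out == cnt_in)
        |-> out_data == sym_data
  );
endmodule
```

Because `sym_data` is unconstrained, a single proof of `a_data_integrity` covers every possible data value, which is the kind of leverage a bitwise simulation-style checker cannot provide.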
Planning, Strategy and Abstraction Models to Reduce Complexity
Of course, a natural question is whether it is always practical to replace a simulation testbench with a formal testbench, given the complexity barriers of formal verification. The answer is no. Careful up-front planning is needed to determine which designs are better suited to formal verification than simulation. The level of hierarchy at which formal is applied should also be carefully selected: it could be at the same level as the lowest-level simulation testbench, or one level lower, but no more. Most importantly, planning should produce a strategy for implementing Abstraction Models, which are key to reducing the complexity of formal proofs, speeding up runtimes by factors of thousands. More on Abstraction Models in a later post.