What is an Abstraction Model? How do we ensure its correctness?

Answered by Anshul Jain | June 18, 2020

Short answer

An Abstraction Model can be a savior when checkers are inconclusive and stuck at insufficient proof depths. Abstraction Models combat proof complexity. They help formal tools reach deeper corners of a design in far fewer steps of formal search. They help achieve sufficient proof depths for checkers and provide confidence that the last design bug has been found. The correctness of an Abstraction Model can be ensured by proving it against the un-abstracted design, making it a sound solution to formal complexity.

Read on for the complete answer…

When we take on the task of block-level formal sign-off, writing end-to-end checkers and proving them becomes a necessity. End-to-end checkers are hard to prove because they depend on a much larger fraction of the design than local checkers or RTL assertions do. Almost always, we expect them to run into formal proof complexity. That is, we will be unable to achieve the required proof depth (a proof depth that will not miss any bugs) without the use of Abstraction Models. How do we know that a checker is facing complexity? We observe its depth-vs-runtime graph. If the graph has gone beyond the knee of the curve and the current depth is still insufficient, the checker has hit a complexity wall that we need to overcome. Such a situation calls for Abstraction Models to combat the complexity and help the formal tool achieve the proof depth we need. To see how Abstraction Models do that, let us look at the process of creating effective ones.

The process starts by identifying the hardest checker: the one farthest from reaching its required proof depth. Once we have identified it, the next step is to locate the source of complexity in the design. Common sources of complexity are long input-to-output latency (pipelines), deep FIFOs, large counters, huge RAMs, lengthy linked lists and sequentially deep FSMs. Typically, a source of complexity either introduces deep state space that takes many cycles to cover, or adds so much logic to the scope of the checker that it slows down each search step. An Abstraction Model overcomes such sources of complexity. It either provides short-cuts that bring the deep states closer to reset, thereby reducing the required proof depth of the checker, or it speeds up the verification of each step. In both cases, it enables the tool to find failures faster and/or achieve the required proof depth.

Consider the example of a data integrity checker that has been identified as the hardest checker for a given block. Suppose that the block has a counter-based throttling mechanism which is irrelevant to the block-level function but important for meeting a system-level requirement. This throttling mechanism limits the rate of outgoing packets from the block, introduces deep states for formal analysis and pushes the required proof depth higher. Thus, the throttling counter is one of the sources of complexity. To resolve this, we analyze the RTL of the counter and craft an Abstraction Model for it.

The RTL implementation of the counter shows that:

  • It starts from zero after reset
  • On each increment, it either increases by one or rolls over to zero after reaching its maximum value (N-1)
  • It takes N cycles to reach all the states of the counter (0 to N-1)
  • It exercises N state transitions in total
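
For concreteness, the counter being described might look like the RTL sketch below. The module and signal names (throttle_counter, incr, count) and the parameter N are illustrative placeholders, not taken from the original design.

```systemverilog
// Hypothetical RTL for the throttling counter described above.
// Module/signal names and parameter N are illustrative placeholders.
module throttle_counter #(parameter int N = 1024) (
  input  logic                 clk,
  input  logic                 rst,
  input  logic                 incr,   // pulses when a packet goes out
  output logic [$clog2(N)-1:0] count
);
  always_ff @(posedge clk) begin
    if (rst)
      count <= '0;                             // starts from zero after reset
    else if (incr)
      count <= (count == N-1) ? '0             // rolls over at max value (N-1)
                              : count + 1'b1;  // otherwise increments by one
  end
endmodule
```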

Since throttling does not affect the function of the block being verified by the data integrity checker, we can create an Abstraction Model of this counter such that it implements the following properties:

  1. If (count != N-1), count value should increase on increment
  2. If (count == N-1), count value should roll over to zero on increment
  3. Count value should remain unchanged when there is no increment
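
Expressed in SVA, these properties might look like the sketch below. The signal and module names mirror the hypothetical counter above, and the extra range assumption (keeping count within [0, N-1]) makes the intended value range explicit; both are assumptions of this sketch rather than details from the original.

```systemverilog
// Sketch of the counter Abstraction Model as SVA assumptions.
// Names (counter_abs, clk, rst, incr, count) mirror the hypothetical RTL above.
module counter_abs #(parameter int N = 1024) (
  input logic                 clk,
  input logic                 rst,
  input logic                 incr,
  input logic [$clog2(N)-1:0] count
);
  // Keep the abstract count inside the legal range [0, N-1]
  p_range: assume property (@(posedge clk) count <= N-1);

  // 1. If (count != N-1), count increases on increment (by any amount)
  p_incr: assume property (@(posedge clk) disable iff (rst)
    (incr && count != N-1) |=> count > $past(count));

  // 2. If (count == N-1), count rolls over to zero on increment
  p_roll: assume property (@(posedge clk) disable iff (rst)
    (incr && count == N-1) |=> count == '0);

  // 3. Count holds its value when there is no increment
  p_hold: assume property (@(posedge clk) disable iff (rst)
    !incr |=> count == $past(count));
endmodule
```

Because nothing here constrains the post-reset value beyond the range assumption, and the increase in property 1 is unconstrained in amount, the model naturally admits the superset behaviors listed below.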

Through this simple Abstraction Model, we cover the actual function of the counter (as implemented in the RTL) and admit additional behaviors, making it a superset of the design behavior. Namely:

  1. Count will be allowed to start from any value in the range [0, N-1]. RTL allows the count to start from [0] only. [0, N-1] is a superset of [0]
  2. Count will be allowed to increment by any value in the range [1, N-1]. RTL allows the count to increment by [1] only. [1, N-1] is a superset of [1]

Once we replace the RTL of the counter with its Abstraction Model, i.e., cutpoint the count signal and assume the Abstraction Model's properties on it, packets are allowed to show up more often at the output, and our hardest checker, i.e., the data integrity checker, no longer needs to wait N cycles to check the correctness of the data. This brings failures closer to reset and reduces the required proof depth of the checker.
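
The exact mechanics vary by tool, but the SystemVerilog side of such a setup might look like the bind below: the formal tool is told to treat count as a cutpoint (a free input), and the Abstraction Model is bound onto the design so its assumptions constrain that signal. The design name my_block, the instance path u_throttle and the increment condition pkt_sent are hypothetical placeholders.

```systemverilog
// Bind the Abstraction Model's assumptions onto the block under test.
// my_block, u_throttle and pkt_sent are hypothetical placeholders; the
// cutpoint on u_throttle.count itself is declared via a tool-specific command.
bind my_block counter_abs #(.N(1024)) u_counter_abs (
  .clk   (clk),
  .rst   (rst),
  .incr  (pkt_sent),          // condition that advances the throttling counter
  .count (u_throttle.count)   // the cutpointed counter state
);
```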

Once we have used the Abstraction Model to resolve complexity, the next step is to prove the correctness of the Abstraction Model by verifying it against the RTL. In this context, the goal of verification is to ensure that the Abstraction Model really is a superset of the RTL implementation. If we happen to create an incorrect Abstraction Model, then when we run its properties as checkers on the RTL, failures will show up with counterexamples detailing design behavior that the RTL implements but the Abstraction Model disallows. Since an Abstraction Model is proven against a significantly smaller portion of the RTL, complexity is a much smaller hurdle in proving its correctness.
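
One common way to do this is to write each property once and select between assume and assert with a parameter: the assume flavor is used when the abstraction replaces the counter, and the assert flavor is proven against the real counter RTL. Below is a sketch of that pattern, using the first property as an example (the other properties follow the same shape); the parameter name USE_ASSUME is an assumption of this sketch.

```systemverilog
// Sketch: the same property serves as an assumption when abstracting,
// and as an assertion when proving the Abstraction Model against the RTL.
module counter_abs_prop #(parameter int N = 1024,
                          parameter bit USE_ASSUME = 0) (
  input logic                 clk,
  input logic                 rst,
  input logic                 incr,
  input logic [$clog2(N)-1:0] count
);
  property p_incr_pr;
    @(posedge clk) disable iff (rst)
      (incr && count != N-1) |=> count > $past(count);
  endproperty

  generate
    if (USE_ASSUME) begin : g_assume
      a_incr: assume property (p_incr_pr);  // abstraction in use
    end else begin : g_assert
      c_incr: assert property (p_incr_pr);  // proving the model against RTL
    end
  endgenerate
endmodule
```

If the assert flavor fails on the un-abstracted design, the counterexample pinpoints a real counter behavior that the model wrongly excludes.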

Follow-up questions? — #AskOski