As in any engineering endeavor, formal verification involves people in many different roles, often including formal managers and, given the technically deep and complex nature of FV, nearly always one or more formal experts. The manager of the formal verification effort on a project may have formal as his or her primary or sole responsibility, or may manage multiple aspects of verification (e.g., both simulation and FV). However, even a full-time formal verification manager may or may not drive the overall formal program in his or her company or organization.
In part one of this discussion, I talked about the emerging role of the Formal Verification (FV) Program Leader, an individual who enables and drives the formal process by navigating organizational dynamics, understanding designs and their verification complexities and schedules, developing and presenting ROI trade-offs, and so on, all to help achieve project goals. I listed six aspects of this role that the FV Program Leader must master in order to effectively lead the adoption and deployment of formal verification: Organization, Training and Upskilling, Test Planning, Progress Metrics, Sign-off Process, and Post Mortem Analysis.
In the previous blog, I discussed the first two aspects in more detail: the organization of formal, including the structure and growth of the internal formal team; and the training and upskilling of that team, covering internal vs. external training and its structure and style. What follows is a discussion of two more aspects of the FV Program Leader's role that are crucial to the effective deployment of in-house formal verification in any organization: test planning and measuring progress via quantitative metrics.
Test Planning: Any formal project should begin with an examination of the reasons why formal verification is being used in addition to simulation. Is the goal to supplement simulation by finding corner-case bugs, to provide an environment in which designers can bring up their designs as quickly as possible, to thoroughly test a specific feature, to replace block-level simulation entirely, or to verify system-level architectural properties? Answering these questions, and documenting the plan of execution, comprises the test planning stage. It includes deciding which level(s) of formal will be used (architectural formal verification, end-to-end formal verification of RTL blocks, and/or formal “apps”), identifying the blocks that are the best targets for formal and will return the best bang for the buck, scoping the formal effort for those blocks in detail, and producing a written test plan and schedule for the formal work. The expertise of the available formal team must be mapped to the specific formal tasks chosen. A significant decision at this stage is how to partition the system or subsystem into the portions to verify with formal and those to verify with simulation, since some blocks yield a much higher return on effort with formal than others. Each block chosen for formal must be scoped to determine the magnitude and complexity of its formal verification effort, and a detailed list of the formal checkers to be written (i.e., the formal test plan) must be created. Depending on the organization of the overall verification team, the formal team might be encouraged to work closely with the simulation team, including re-using formal properties (especially constraints) in simulation and addressing simulation coverage holes with formal verification.
The formal engineers might also need to collaborate with each other to re-use properties, especially across their design interfaces, particularly if an assume-guarantee methodology is being used. Decisions need to be made as to whether there is value in validating formal constraints in simulation, and whether formal coverage will be deployed to measure the completeness of the formal work. The FV Program Leader will typically drive and coordinate these activities, as well as map the formal test plans to the tapeout schedule. These test plans will include metrics that help track the formal team's progress.
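To make the scoping and planning decisions above concrete, a test plan can be captured as structured data rather than free-form text. The sketch below is purely illustrative: the record fields, approach names, and helper function are my own hypothetical choices, not part of any particular tool or methodology.

```python
from dataclasses import dataclass, field

# Hypothetical formal test plan entry; field names and the "approach"
# vocabulary are illustrative, not tied to any specific flow.
@dataclass
class FormalPlanEntry:
    block: str                  # RTL block targeted for formal
    approach: str               # e.g. "architectural", "end-to-end", "app"
    checkers: list = field(default_factory=list)     # checkers to write
    constraints: list = field(default_factory=list)  # interface constraints
    owner: str = ""             # formal engineer assigned
    weeks_estimated: int = 0    # scoping estimate for schedule tracking

def plan_summary(plan):
    """Roll up total scoped effort and checker count across all blocks."""
    total_weeks = sum(entry.weeks_estimated for entry in plan)
    total_checkers = sum(len(entry.checkers) for entry in plan)
    return total_weeks, total_checkers
```

A structure like this makes it easy to roll the per-block scoping estimates up into a schedule and to compare the formal allocation against the simulation plan for the same blocks.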
Progress Metrics: Historically, an Achilles’ heel of formal verification has been the formal team's ability to predictably converge on the verification goals laid out at the beginning of the project. In executing the test plans, formal verification engineers face considerable barriers to predictability that are unique to FV: coming up with a complete list of checkers (with or without designer help), converging on a set of constraints by working through a series of spurious failures, achieving proof convergence in the face of exponential proof complexity, and measuring formal coverage to justify the completeness of the formal work. Once test planning is complete and the formal project is underway in earnest, measuring and reporting progress metrics offers insight into how the project team is doing, provides a vehicle to measure each formal area, and gives management critical visibility into the progress and completion of each piece of the formal project. These metrics include the number and percentage of checkers written, passing, and failing; abstraction models completed; proof depth(s) required and achieved; the number of constraints passing or failing; and the formal coverage metrics that have recently become available in formal tools. At this stage, a knowledgeable FV Program Leader will be able to decide whether adequate human and tool resources have been assigned to formal, whether they are distributed properly or need to be reassigned, and whether the division of blocks and allocation of resources between formal and simulation needs to be reevaluated if one or the other is falling behind. Depending on the specific design and what is being verified on it, the work can be dominated by different aspects, such as building the checkers, converging on the right set of constraints, or managing proof complexity.
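The checker-status metrics described above reduce to simple bookkeeping over per-checker results. Here is a minimal sketch of that computation; the status vocabulary ("proven", "bounded", "failing", "not_written") and the report fields are my own assumptions, not the output format of any formal tool.

```python
def progress_metrics(checkers):
    """Summarize per-checker status into a progress report.

    `checkers` maps checker name -> status, where status is one of
    "proven", "bounded", "failing", or "not_written".
    (This status vocabulary is illustrative, not tool-specific.)
    """
    counts = {"proven": 0, "bounded": 0, "failing": 0, "not_written": 0}
    for status in checkers.values():
        counts[status] += 1
    total = len(checkers)
    written = total - counts["not_written"]
    # Percentages give management a quick read on plan completion.
    pct_written = 100.0 * written / total if total else 0.0
    pct_proven = 100.0 * counts["proven"] / total if total else 0.0
    return {"counts": counts,
            "pct_written": pct_written,
            "pct_proven": pct_proven}
```

Tracking these numbers week over week, alongside proof depths and constraint status, is what lets the FV Program Leader spot an area that is stalling early enough to rebalance resources.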
While the formal test planning stage is meant to predict which aspects might be the long poles, the metrics help judge the accuracy of those predictions during the project and determine whether course corrections are needed. Another task of the FV Program Leader in this phase is to decide whether to use formal regression testing, which is not as universally practiced as regression testing is in simulation.
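If formal regression testing is adopted, its core is a comparison of the latest proof results against a stored baseline, flagging properties that were proven before but no longer are. A minimal sketch of that comparison, with hypothetical result strings and no dependence on any specific formal tool:

```python
def formal_regression(baseline, latest):
    """Compare the latest proof results against a stored baseline.

    Both arguments map property name -> result, one of "proven",
    "bounded", or "failing" (an illustrative vocabulary).
    Returns (regressed, new_failures): properties that were proven in
    the baseline but no longer are, and failing properties that did
    not exist in the baseline at all.
    """
    regressed = [name for name, result in baseline.items()
                 if result == "proven" and latest.get(name) != "proven"]
    new_failures = [name for name, result in latest.items()
                    if result == "failing" and name not in baseline]
    return regressed, new_failures
```

Run after each RTL or constraint change, a check like this turns formal results into the same pass/fail regression signal that simulation teams already rely on.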
The last two aspects of the FV Program Leader’s role are the Sign-off Process and Post Mortem Analysis, which I will cover in the third and final installment of this multi-part blog. Meanwhile, if you are interested in learning about the Oski formal verification methodology, join us on June 9 for our Decoding Formal Training Day at DAC 2016 in Austin, Texas, where we will dive deep into end-to-end checkers, abstractions, managing proof complexity and formal coverage.