Complex IP Verification Methodology Using Property Driven Simulation in IEV

by Deepanjan Roy, Abhishek Datta; Nvidia & P. Lokesh Babu, Cadence Design Systems, TechOnline India - March 21, 2012

In this paper, we address the classical approaches and drawbacks of applying formal verification/model checking to a new design, and how property driven simulation can help cross those barriers.

Formal Verification [FV] is characterized by its exhaustiveness and by the inherent state-space explosion problem. The latter implies that just capturing specifications in the form of assertions is not enough. Optimized environment modeling and compositional abstractions are frequently required; otherwise, non-convergence of the properties results in diminished confidence in the verification.

An intuitive solution for improving confidence is to combine FV with simulation. Cadence Incisive Enterprise Verifier [IEV] allows for both classical Model Checking [MC] and Property Driven Simulation [PDS]. In the rest of this paper, we look at the classical approaches and drawbacks of applying FV/MC to a new design, and at how PDS can help cross those barriers.

There are three major steps involved in IP verification by MC. The first step is to identify all high-level functions that need to be verified: essentially breaking the design into several high-level functions and judging which of them can or should be targeted by FV. This helps capture transaction-level assertions. The second step is to define a constraint model for the IP that creates an optimal yet functional environment. The last step is bug hunting, i.e. proving those transaction-level assertions. These high-level assertions are more effective than local assertions for finding bugs, but of course they are much harder to prove.
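For illustration, a transaction-level assertion of the kind meant here might look like the following SystemVerilog sketch. The module, the handshake, and all signal names (req_valid, req_ready, req_id, rsp_valid, rsp_id) are hypothetical and not taken from the design discussed later; the point is only that the property tracks a request end to end rather than checking a local implementation detail.

module txn_checker #(parameter ID_W = 2) (
  input logic            clk,
  input logic            rst_n,
  input logic            req_valid,
  input logic            req_ready,
  input logic [ID_W-1:0] req_id,
  input logic            rsp_valid,
  input logic [ID_W-1:0] rsp_id
);
  // Every accepted request with a given ID is eventually answered by a
  // response carrying the same ID -- an end-to-end, transaction-level check.
  // A bounded response window is used here purely for illustration.
  property p_req_gets_rsp(logic [ID_W-1:0] id);
    @(posedge clk) disable iff (!rst_n)
      (req_valid && req_ready && req_id == id) |-> ##[1:64] (rsp_valid && rsp_id == id);
  endproperty

  for (genvar i = 0; i < 2**ID_W; i++) begin : g_per_id
    a_req_gets_rsp: assert property (p_req_gets_rsp(i));
  end
endmodule

In practice such a checker would be attached to the DUT with a bind statement, leaving the RTL untouched.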

PDS helps extensively in defining the constraint model. An over-constrained environment is highly undesirable for FV, and it is in fact very important to establish that the environment is not over-constrained. In a PDS environment, an over-constraint causes a “dead-end”, and this is the most effective and practical way of detecting it. A major hole in the constraint model, or a wrong constraint, can easily be detected by debugging transactions in the long simulation waveform produced by the “search & debug -solver” feature in IEV PDS.
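As a small, hypothetical example of how an over-constraint surfaces as a dead-end, consider two assumptions that cannot both be satisfied once the design has consumed all of its credits; the signal names are illustrative only.

module env_constraints (
  input logic       clk,
  input logic       rst_n,
  input logic       req_valid,
  input logic [3:0] credit_cnt
);
  // Intended constraint: requests are only issued while credits are available.
  m_need_credit : assume property (@(posedge clk) disable iff (!rst_n)
    req_valid |-> (credit_cnt > 0));

  // Accidental over-constraint: the environment is also forced to request on
  // every cycle. Once the design has consumed all credits, no legal input
  // assignment exists, and a PDS run stops at a "dead-end" right there --
  // which is exactly how the conflict gets noticed and debugged.
  m_always_request : assume property (@(posedge clk) disable iff (!rst_n)
    req_valid);
endmodule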

An expected transaction waveform is the biggest confidence booster for the constraint model. In a pure formal flow, constraint-model development starts with a heavily under-constrained environment, and the way to find over-constraints is to prove AFAs, which is time consuming and takes too many iterations to close. A formal counter-example is short and involves cone-of-influence (COI) and internal signals, which makes debugging even tougher than in PDS. Unlike formal, PDS constraint-model development is independent of assertion quality and quantity.

Once the constraint model is ready [functional and optimal], this long simulation debug waveform helps characterize the design properly. It helps in understanding how a top-level function is implemented inside the design, and in identifying the different stages of the design, the control flow, the data flow, and their correlation. It provides the design knowledge required to decompose the assertions or the design, or to perform abstraction, for formal analysis, and it helps find the sweet spots for formal analysis.

PDS is very powerful for flushing out not only environment or understanding-related issues but also basic, real design issues. It is very effective at finding most protocol violations, basic control-data correlation issues, transaction drops, etc. The real power and distinguishing feature of Cadence IEV is that it allows basic simulation biasing by means of “soft constraints”. Soft constraints are honored during simulation and have no effect on formal analysis. If there is a conflict with a normal constraint, soft constraints are ignored even during simulation. This allows tuning the simulation to find or reproduce really complex bugs.
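The sketch below illustrates the intent, with hypothetical signal names; the exact IEV directive for marking an assumption as soft is tool-specific and omitted here. A hard assumption keeps the stimulus legal, while a biasing assumption would be registered as soft so that it steers simulation toward maximum-length write bursts without affecting formal analysis.

module sim_bias (
  input logic       clk,
  input logic       rst_n,
  input logic       req_valid,
  input logic       req_is_write,
  input logic [3:0] req_len
);
  // Hard constraint: requests must stay protocol-legal in both simulation
  // and formal analysis.
  m_legal_len : assume property (@(posedge clk) disable iff (!rst_n)
    req_valid |-> req_len inside {[1:8]});

  // Biasing intent: prefer maximum-length write bursts so simulation stresses
  // the re-ordering logic. Registered as a *soft* constraint in IEV (directive
  // not shown), it is honored in simulation, silently dropped whenever it
  // conflicts with the hard constraint above, and ignored by formal analysis.
  m_bias_big_writes : assume property (@(posedge clk) disable iff (!rst_n)
    req_valid |-> (req_is_write && req_len == 4'd8));
endmodule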

The sign-off procedure involves coverage metrics, as in general simulation, once none of the assertions are failing. Cadence has an auto-coverage flow with two stages. In the first stage, selected or all AFAs are enabled and simulation is used to cover the easy-to-hit targets. The user can specify the number of simulation runs but cannot control the paths taken; the tool internally finds the most effective paths to cover the maximum number of AFAs. Formal analysis is then run on the uncovered targets; no user inputs are taken for formal analysis at this stage.

The second stage is an expert formal mode to cover hard or unreachable targets. Here, all sorts of user inputs are entertained, such as engine and effort settings. The witness corresponding to each formally covered AFA is stretched and a test case is created from it. Suppose there are N simulation runs, M AFAs covered in the first stage, and K AFAs covered in the second stage; then the total number of test cases is T = M + N + K. Coverage analysis is an important part of this: the modules with lower coverage require formal analysis. Simulation is no longer constrained-random; formal is assisting simulation in finding the interesting corner cases needed to cover the entire structural solution set.

A memory controller IP is used as a case study. It has three major types of interfaces: a multi-cycle, credit-based client request interface, a DDR interface, and an ARC interface [on-chip memory interface]. Transactions initiated by a client pass through a FIFO and a multi-stage, complex hierarchical arbiter, and the winner goes for address translation. Virtual-to-physical [linear] address translation is done through an address cache; the address is then translated again to a DRBC address and passed through a decoder. A good transaction then passes through a WCAM [sparse array] and a highly optimized, very efficient transaction sorter to meet the DDR specification most effectively, while a bad transaction gets dropped.



Figure 1: Simplified Block Diagram of the DUT

 

This design benefits greatly from formal analysis because of its concurrency, complex arbitration mechanism, transaction merging and re-ordering, etc. Classical simulation-based verification techniques are not enough to verify it, because it is very difficult to instrument simulation to reach deep into the design's complex re-ordering structures. It is also very difficult to apply FV to this design because of the huge number of sequential elements and the presence of various large counters for meeting the DDR specification. Very detailed design knowledge is required to decompose the design or to perform any abstraction on the design or the assertions. Just pushing one transaction through the design requires a lot of register programming and counter abstraction.

PDS helped to create the assertion-based environment by modeling the three major interfaces. A basic PDS search was effective enough to flush out all environment-related issues. PDS also helped with design abstraction: the client FIFO and the write buffer were black-boxed, the design was broken into three logical partitions, and the sweet spots for formal analysis, such as the multi-stage arbiter, the WCAM, the address cache, the address translation function, and the transaction sorter, were identified. The long simulation trace helped characterize the design better, and just eye-balling it identified the blocks needed for forward progress. These waveforms are fully justified and contain all signals, unlike formal waveforms, which are short and contain only COI signals. This also helped with the counter abstraction required for formal analysis.
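One common way to realize such a counter abstraction is sketched below, purely as an illustration and not necessarily as done on this design: the deep counter is replaced by an over-approximating model whose "expired" flag is chosen by a free input.

module ddr_timer_abs (
  input  logic clk,          // ports kept so this model can drop in for the
  input  logic rst_n,        // real counter without touching its neighbours
  input  logic free_expired, // unconstrained input: formal chooses its value
  output logic expired
);
  // Instead of counting thousands of cycles, "expired" is picked
  // nondeterministically. Every behaviour of the real counter is still a
  // behaviour of this model, so proofs stay sound; the price is possible
  // spurious counter-examples, which must be reviewed.
  assign expired = free_expired;
endmodule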

In order to facilitate out-of-order transactions, several free-lists are maintained, and the design takes the initial 64 cycles to program them; no transactions are sent out of the design during that period. This understanding helped in programming optimum values for the DDR-related configurations: a high value is typically safe but bad for formal analysis. A random but fixed DRBC address was color-coded, and the attributes of transactions sent to that address, from any client, were constrained. Checkers were implemented at the DDR/ARC interface to observe them. This kind of formal scoreboarding, in the form of assertions that capture transaction behavior to check functional correctness, is hard to prove in pure formal analysis and is hence avoided there.
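A hypothetical sketch of this color-coded checking style is given below: one fixed address is singled out, writes to it from any client are constrained to carry a known data tag, and an assertion at the output interface checks that the tag survives merging and re-ordering. All names (clk, rst_n, cl_*, ddr_*) are illustrative, not the actual DUT signals.

module color_checker #(
  parameter logic [31:0] COLOR_ADDR = 32'h0000_1000,
  parameter logic [31:0] COLOR_DATA = 32'hA5A5_A5A5
) (
  input logic        clk,
  input logic        rst_n,
  input logic        cl_wr_valid,
  input logic [31:0] cl_wr_addr,
  input logic [31:0] cl_wr_data,
  input logic        ddr_wr_valid,
  input logic [31:0] ddr_wr_addr,
  input logic [31:0] ddr_wr_data
);
  // Environment side: any client write to the colored address carries the tag.
  m_color_data : assume property (@(posedge clk) disable iff (!rst_n)
    (cl_wr_valid && cl_wr_addr == COLOR_ADDR) |-> (cl_wr_data == COLOR_DATA));

  // Checker side: whatever reaches the DDR interface for that address must
  // still carry the tag, regardless of merging or re-ordering inside the DUT.
  a_color_preserved : assert property (@(posedge clk) disable iff (!rst_n)
    (ddr_wr_valid && ddr_wr_addr == COLOR_ADDR) |-> (ddr_wr_data == COLOR_DATA));
endmodule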

Four functional bugs were caught by this analysis on this heavily simulated design. In one of them, the design expects the response attributes to be stable during a multi-cycle response; otherwise the second data beat is routed to the wrong client. Simulation failed to catch this because a simulation environment is typically over-constrained, and the BFM was keeping the attributes stable during the response phase. On the other hand, it was hard to capture in formal analysis because it requires a read transaction to flow through the design, which means a bounded proof of more than 70 cycles, very hard to achieve. With PDS, we caught it as a protocol assertion failure for one client.
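A minimal sketch of the kind of protocol assertion that flags this bug is shown below, assuming hypothetical response-interface signals and back-to-back response beats.

module rsp_attr_check (
  input logic       clk,
  input logic       rst_n,
  input logic       rsp_valid,
  input logic       rsp_last,
  input logic [3:0] rsp_client_id
);
  // Across consecutive beats of a multi-cycle response, the routing attribute
  // (here the client ID) must not change; otherwise the next beat is steered
  // to the wrong client. Back-to-back beats are assumed for simplicity.
  a_rsp_attr_stable : assert property (@(posedge clk) disable iff (!rst_n)
    (rsp_valid && !rsp_last) ##1 rsp_valid |-> $stable(rsp_client_id));
endmodule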

Here is our proposed flow for IP verification using PDS and MC:

 


Figure 2: Proposed IP Verification Flow using PDS and MC

 

This is a two-step approach. PDS is used in the first step, which allows creating the environment and the checkers in parallel, since they have no mutual dependencies. Environment issues and checker issues can be resolved with basic PDS runs. Any failure exposes either an environment/checker issue or a basic bug.

Once a stable infrastructure is set up, advanced PDS runs are executed. Simulations can be heavily tuned according to the focus, and search points can be specified for formal to explore around. Any failure at this stage should indicate a valid bug. After enough advanced PDS runs, once no assertions are failing, coverage closure is done through the two stages described earlier, and the assertions are replayed during this formal-assisted simulation. In the second stage, model checking is used.

 

About the authors:

Deepanjan Roy is a Formal Verification Engineer at Nvidia and can be reached at droy@nvidia.com. Abhishek Datta is a Formal Verification Engineer at Nvidia and can be reached at abhishekd@nvidia.com. P. Lokesh Babu is a Verification Tech Lead at Cadence Design Systems and can be reached at lokeshp@cadence.com.

 
