Using verification coverage with formal analysis

by Vinaya Singh and Joseph Hupcey III, Cadence Design Systems, Inc., TechOnline India - April 14, 2011

Three primary questions emerge during verification and coverage collection with formal analysis: How good are my formal constraints? How good is my formal verification proof? How can I feel confident my verification is complete? Here are the answers.

Introduction

Verification engineers are increasingly using coverage metrics such as code coverage and functional coverage to guide the verification process to completion. These metrics, however, were developed specifically for simulation. Many contemporary verification flows also include formal analysis tools that provide exhaustive block-level proofs based on properties or assertions. The level of coverage provided by these tools needs to be evaluated, too – but it’s necessary to understand how formal “coverage” differs from simulation coverage, and how formal coverage results can reinforce, or in some cases even replace, coverage created by simulation engines.

In metric-driven verification flows, an executable verification plan tracks simulation coverage metrics on an ongoing basis, using the metrics to evaluate the completion of the verification process. As a result, engineers can quickly see whether a block is completely verified, or if further tests are needed. Steps of the process include developing the verification plan, constructing tests, executing tests, and measuring and analyzing coverage metrics. The following diagram provides a high-level view of this process in the context of assertion-based verification (ABV):

Figure 1 – A plan and metric-driven verification flow in the context of assertion-based verification, with execution of assertions and results tracking in a unified view.
(Source: Cadence Design Systems, Inc.)

As we’ll show in this article, this metric-driven framework, and all the terms and methodologies you are familiar with from writing simulation testbenches and/or constrained-random stimulus in e or SystemVerilog -- terms like "constraints," "code coverage," and "functional coverage" -- have essentially the same meaning in the formal-centric assertion-based verification world. In short, when formal analysis is included in coverage metrics, verification teams can “check off” blocks once they’ve been completely verified with formal methods, or discover gaps in coverage that need to be filled with simulation or formal tools.

Assertions are the primary means of leveraging formal analysis. In fact, assertions are written to capture three phases of verification -- the test environment (assertions as constraints), checks (assertions as checks), and coverage (assertions as functional covers). Formal analysis allows verification and coverage collection to begin as soon as the test environment is being developed. These three phases, along with verification and coverage collection, are shown in the following figure.

Figure 2 – Assertions serve several roles in the verification process. (Source: Cadence Design Systems, Inc.)
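
To make these three roles concrete, here is a minimal, illustrative sketch in the same PSL pragma style used later in this article. The signal names (clk, req, ack, wr_en, full) and the specific properties are hypothetical placeholders that would be adapted to the actual block interface.

    // Constraint: legal input behavior (an assumption on the environment)
    // psl m_no_write_full : assume always (full -> !wr_en) @(posedge clk);
    // Check: expected design behavior (an assertion to be proven)
    // psl a_req_gets_ack : assert always ({req} |=> {[*0:4]; ack}) @(posedge clk);
    // Cover: an interesting scenario to reach (a functional coverage point)
    // psl c_fifo_full : cover {full} @(posedge clk);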

 

Three primary questions emerge during verification and coverage collection with formal analysis. These questions are:

• How good are my formal constraints?
• How good is my formal verification proof?
• How can I feel confident my verification is complete?

Let’s address these questions one-by-one.

 

How good are my formal constraints?

The term “constraints” in this question refers to the development of constraints on the universe of possible stimulus. As alluded to above, this is conceptually similar to a constrained-random dynamic simulation stimulus setup. However, in this process, the user describes valid behavior of inputs using Property Specification Language (PSL) or SystemVerilog Assertions (SVA) properties. As you might expect, if the inputs you want to drive are for popular standard protocols (OCP, AHB, etc.), then commercial verification IP components are likely available to give you a running start.
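
As a small, hypothetical illustration of such input constraints (written in the PSL pragma style used elsewhere in this article, with illustrative req/ack handshake signal names), the environment might be constrained to hold a request until it is acknowledged and to drop it afterward:

    // Assumption: once asserted, req must stay high until ack arrives
    // psl m_req_stable : assume always ((req && !ack) -> next req) @(posedge clk);
    // Assumption: req must drop for at least one cycle after it is acknowledged
    // psl m_req_drop : assume always ((req && ack) -> next !req) @(posedge clk);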

Once your PSL/SVA “properties” (also called “constraints”) are written, you apply the formal engines to generate traces from these constraints and start running them on the design. Again, this is similar to running tests in simulation – it's just that in this case it's the formal engines doing the work versus an event-driven simulator. Once you are engaged in the process, inevitably you will need to correct and refine your constraints as design problems and bugs become apparent.

In particular, once you are up to speed it's clearly desirable to assess exactly how good or bad your constraints are. Specifically, are constraints hiding some legal behavior ("over constraining") or are constraints allowing some illegal behavior ("under constraining")?

Modern dual-engine formal and simulation tools such as the Cadence Incisive Enterprise Verifier can generate a “witness trace” for functional coverage points in the design. Derived from the assertions themselves, witness traces are sequences of values that guarantee that a coverage point can be hit. If a witness trace is replayed in simulation, it can activate coverage points and show whether coverage points are reachable.
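
For instance, a functional coverage point like the hypothetical one below (in PSL pragma form, with illustrative req/ack names) would, if it is reachable under the current constraints, produce a witness trace showing one specific legal way to complete the handshake; that trace can then be replayed in simulation to hit the cover.

    // Coverage point: a request followed by an acknowledge within five cycles
    // psl c_req_to_ack : cover {req; [*0:4]; ack} @(posedge clk);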

You can make sure you’re not over-constraining by generating cover and witness traces from your constraints. Using these traces, the tool checks whether all lines of the RTL design can be reached (a “dead code” check) and whether all states of finite state machines are traversable. An advanced formal verification tool will automatically generate cover and witness traces for you.

With this check you are essentially running a machine-generated “directed test.” The best part (and the advantage of formal over dynamic simulation) is that, because formal algorithms are mathematically exhaustive, a dead-code check can fail conclusively: formal can tell you with mathematical certainty that no legal stimulus exists that could ever hit that line of RTL code. If all dead-code checks and FSM-state checks pass, then we know that the constraints allow sufficient traces to be generated.
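
As a sketch of how over-constraining shows up in these checks, suppose a hypothetical input constraint was written more strongly than intended (the burst_mode and wr_en names here are purely illustrative). Formal then proves the related cover unreachable, and any RTL guarded by burst_mode is flagged as dead code, exposing the over-constraint:

    // Intended: no burst writes while the design is busy.
    // Actually written (too strong): burst_mode is never allowed at all.
    // psl m_no_burst : assume always (!burst_mode) @(posedge clk);
    // Formal proves this cover unreachable under the constraint above:
    // psl c_burst_write : cover {burst_mode && wr_en} @(posedge clk);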

At the opposite extreme, the constraints could still be too "loose" and allow some illegal traces to be generated. In short, if your constraints allow some illegal behavior, then your property checks will fail. In this case, modify the offending constraints to fix the issue. Fixes for constraints and device under test (DUT) bugs are handled in the same way during this phase. Of course, your fixes could go too far and create an over-constraining situation as described above. Thus it is best to do reachable coverage collection on a regular basis -- that is, re-run the above analyses periodically to make sure ongoing design and ABV activities don’t break anything -- just like you do with simulation regression testing.
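
For example (again hypothetically, using FIFO-style signal names like those in the cover example later in this article), if a failing check's counterexample shows rd_en asserted while the FIFO is empty, and that stimulus is illegal for the real environment, the fix is to tighten the constraints with an assumption and then re-run the reachability covers to confirm that no legal behavior was constrained away:

    // Tightened constraint: the environment never reads from an empty FIFO
    // psl m_no_read_empty : assume always (empty -> !rd_en) @(posedge clk);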

 

How good is my formal verification proof?

In a real sense this is almost a trick question: if all your assertions are passing, they are guaranteed not to fail on any trace that can be generated from the constraints. These checks are mathematically verified on all such traces (or “machine-generated directed tests” if you prefer). Rephrasing in official formal verification jargon, "the proofs are sound for all reachable coverage behavior." All the coverage metrics generated during “constraint development” are applicable to passing assertions. Furthermore, functional coverage points can be automatically generated for the checks. For example, if you had a PSL check like this:


//psl A1: assert always ( {req} |=> {req[*1:5]; ack && !req}) @(posedge clk);

Incisive® Enterprise Verifier will automatically generate a trigger cover, "cover ({req} @ (posedge clk))", and a witness cover, "cover {req;req[*1:5]; ack && !req} @(posedge clk)". To further build your confidence, you can have the tool display the corresponding waveforms showing that the check was enabled and completed, so you can confirm that there is no basic flaw in the check itself.
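
Written out in the same PSL pragma style as the original check (the labels here are ours, added only for illustration), those two automatically generated covers correspond to:

    // Trigger cover: the check's enabling condition occurs at least once
    // psl A1_trigger : cover {req} @(posedge clk);
    // Witness cover: the check's complete passing sequence occurs at least once
    // psl A1_witness : cover {req; req[*1:5]; ack && !req} @(posedge clk);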

However, there is one possible remaining "gotcha" at this stage -- what about checks that come up inconclusive, or "Explored"? Since proofs provide mathematical certainty, the best initial courses of action are almost always to give the tool a bit more time to run, try different formal algorithms, or apply some basic abstractions to improve the chance of getting a proof. Granted, you might still be left with explored checks no matter what you try -- but all is not lost! With an advanced dual-engine tool such as Incisive Enterprise Verifier, you can easily shift over to the simulation world and apply simulation-centric coverage-driven verification approaches. This process is implicit in the answer to our last qualifying question.

How can I feel confident my verification is complete?

Often the cause of an “Explored” result is that the user has specified a case that’s very deep in the state space -- too deep for a formal algorithm to solve in a given amount of time. Fortunately, one can effectively daisy-chain simple scenarios into a complex case -- a technique called “guide pointing” -- such that the end of each scenario becomes a starting point for the next scenario. Consider the following example PSL code of a FIFO cover and its guide covers:

    // psl cover_fifo_full: cover {full};
    // psl cover_guide_1: cover {(~rd_en && wr_en)[*100] };
    // psl cover_guide_2: cover {(~rd_en && wr_en)[*200] };
    // psl cover_guide_3: cover {(~rd_en && wr_en)[*300] };

In the statements above, in addition to the "cover_fifo_full" cover, the sub-scenarios (or sub-goals) "cover_guide_1" through "cover_guide_3" are described as well. The command console snippet below shows the guide pointing process in action. Specifically, the sub-goals are added as guides using the command "assert -add cover_g* -cover -guide". Then, in the second "prove" run, the cover "cover_guide_3" is used as a stepping stone (or guide point) to reach the final cover "cover_fifo_full".

FormalVerifier> prove
Modeling check mode:
  Vacuity check finished
Verification mode:
  cover_guide_1 : Pass (200)
  cover_guide_2 : Pass (400)
  cover_guide_3 : Pass (600)
  cover_fifo_full : Explored (532)
Assertion Summary:
  Total                  :   4
  Pass                   :   3
  Explored               :   1
  Not_Run                :   0
FormalVerifier> assert -add cover_g* -cover -guide
formalverifier: *W,ALMAAS: "cover_guide_1" is already marked as an assertion.
formalverifier: *W,ALMAAS: "cover_guide_2" is already marked as an assertion.
formalverifier: *W,ALMAAS: "cover_guide_3" is already marked as an assertion.
0 assertions added.
3 guides added
FormalVerifier> prove
formalverifier: *W,MULCGD: Multiple conclusive guides found. Guidance will start from last conclusive guide "test.cover_guide_3".
Verification mode:
  cover_fifo_full : Pass (1026)
Assertion Summary:
  Total                  :   4
  Pass                   :   4
  Not_Run                :   0

With this guide pointing technique, all of the cover point waveforms are now generated. Furthermore, now that you have developed the scenarios, you can enable your checks and run the scenarios as tests to see whether your checks also hold in simulation. All simulation coverage data (code coverage, functional coverage) is generated too, and you can re-use/reference these same cover points in simulation-oriented coverage and planning tools as well.

Let's recap: we've shown how to combine mini-scenarios into one deep scenario. The cover statements are written in Property Specification Language (PSL) or SystemVerilog Assertions (SVA), which implicitly capture these scenarios, and multiple dual-technology methods are available to exercise them. Using this technology you can sign off your verification against exactly the same functional and code coverage goals as in simulation. In fact, formal analysis assists you in scenario development, so you can achieve much higher functional coverage goals much more quickly.

 

Putting it all together

Because a given set of coverage points can be exercised by formal analysis, traditional simulation, or both on a 1:1 basis, the results from two very different algorithmic approaches can now be compared and evaluated on an “apples to apples” basis in real time. This conceptually simple merging of capabilities leads to profound, never-before-available benefits for end users:

-- At a glance, users can see if a given technology (formal or simulation) has reached a solution, and thus they can decide whether to stop right there and move on, or continue to expend resources running the other technology to completion. In particular, if the formal engines reach a solution first, the user can be very confident in accepting the result -- because formal executes an exhaustive proof -- and need not spend more time on further simulations or on writing testbenches for the covered area of the circuit.

-- The formal engines can also export metrics that identify circuit behaviors that are “unreachable” – i.e., there is absolutely no combination of simulation stimulus that can reach the state space to be verified. Clearly this knowledge can be a huge time saver, since users can stop trying to concoct simulation stimulus to reach a particular case and can instead initiate other corrective actions.

-- Augmented by a “replay” capability that drives simulation with stimulus derived from assertions, this technology also makes it easy to monitor simulation code coverage and functional coverage points.

-- All of the above leads to significantly improved resource allocation – both in the upfront verification planning process as well as the coverage filling evaluation phase – as end-users and verification managers can now have objective, unambiguous information about which technology should be applied at a given time and/or to a particular area of the design to get maximum verification benefit.

Figure 3 – Formal and dynamic simulation results can be automatically back-annotated to the verification plan so they can be compared on an “apples to apples” basis. The benefit: Formal engines can identify unreachable states, while simulation can cover cases that are difficult to prove with formal analysis alone.
(Source: Cadence Design Systems, Inc.)

Conclusion

While formal analysis provides a 100% exhaustive proof, it’s only as good as your constraints. Use formal coverage as described here to evaluate constraints and proofs, and you’ll have the confidence to state that verification is complete. In addition to building this confidence, formal methods can determine whether coverage points are reachable, helping the overall coverage-filling process. When combined with metric-driven planning, management, and results viewing, these techniques let users mutually reinforce the strengths of two very different verification technologies. In turn, this provides users with clear, actionable information never before available.

About the authors:

Vinaya Singh, R&D architect, Cadence Design Systems Inc.
Joseph Hupcey III, product management, Cadence Design Systems Inc.
