How formal MDV can eliminate IP integration uncertainty

by Raik Brinkmann, TechOnline India - January 12, 2012


The increased deployment of silicon intellectual property (IP) blocks is vital to boosting productivity in the development of large, complex system-on-chip (SoC) designs. But the increase in SoC design productivity is not matched by as great an increase in SoC verification productivity. Managers and engineers still struggle with a persistent “verification productivity gap.” Why? Because there is a persistent IP verification quality gap, too. The resulting uncertainty about the original verification quality of individual IP blocks often requires time-consuming remedial verification by the SoC design team. The alternative is to risk SoC design failure because of inadequate IP verification, which ultimately delays the project even more.

This article outlines how the latest formal metric-driven verification (MDV) methodology and technologies can eliminate integration uncertainty through the automatic generation of Accellera-defined coverage metrics, without the assistance of simulation. This formal MDV methodology measures not only the usual control coverage, but also observation coverage — a serious missing link in many other MDV approaches. The methodology is easily integrated into existing MDV flows or can be used stand-alone.

The usual questions

Before integrating an IP block, designers need reliable answers to the usual questions regarding its quality: How thoroughly has the IP block’s register transfer level (RTL) functionality been verified? Under what constraints and conditions was it verified? How accurately do those constraints and conditions reflect the operating environment into which the IP block will be integrated? In addition, has the specification been completely implemented in RTL code? And, is the specification complete and correct in the first place?

The questions and uncertainties are exactly the same regardless of whether the IP is sourced from a third party or internally. And they are no less difficult to answer when a verified, working IP is modified for a different use. The answers are considerably more difficult when the IP is configurable for multiple, different uses in different chips. So, how can designers eliminate these uncertainties?

The missing link: observation coverage

Designers generally look to the IP’s verification plan and its simulation/testbench results to try to answer the usual questions. Functional coverage points can be used to ensure that functionality derived from the specification is triggered, so it’s possible to get some idea of the functional coverage. However, applying the coverage point approach in the comprehensive manner necessary to ensure reliable verification is both difficult and time consuming.
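To make this concrete, here is a minimal sketch of a functional coverage point in SystemVerilog. The handshake signals (req, rsp_kind) and the response encodings are hypothetical, chosen only to show the mechanism:

// Hypothetical checker: functional coverage points for a request/response
// handshake derived from a specification statement such as
// "a request may be granted, retried, or aborted."
module handshake_cov (input logic clk, req, input logic [1:0] rsp_kind);

  covergroup handshake_cg @(posedge clk);
    cp_rsp: coverpoint rsp_kind iff (req) {
      bins granted = {2'b01};
      bins retried = {2'b10};
      bins aborted = {2'b11};
    }
  endgroup
  handshake_cg cg = new();

  // The retry-then-grant scenario as a standalone SVA cover property.
  c_retry_then_grant: cover property (@(posedge clk)
    (req && rsp_kind == 2'b10) ##[1:$] (req && rsp_kind == 2'b01));
endmodule

Writing a handful of such points is easy; enumerating enough of them to cover an entire specification is what makes the approach so laborious.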

Consequently, to assess verification quality and progress, verification teams fall back on a range of industry-standard structural coverage metrics, such as line coverage, statement coverage, block coverage, expression coverage, branch/condition coverage and finite state machine (FSM) coverage {1}.

However, standard structural coverage metrics measure only control coverage, that is, the degree to which stimuli exercise the RTL code. Control coverage indicates which parts of the code have been exercised and helps to answer the question, “Have I written enough stimuli?”

But when a stimulus has triggered a fault, how do we know whether the fault has been observed by a checker? Observation coverage identifies which parts of the code have been checked and which parts have not, and helps to answer the question, “Have I written enough checks?”
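The distinction is easy to see in a small hypothetical fragment: the buggy branch below is trivially exercised, so control coverage reports success, yet the fault goes unnoticed unless some checker actually observes data_o. The signal names and the checking assertion are illustrative assumptions:

// Hypothetical DUV fragment: 100-percent line coverage is easy to reach,
// but the bug in the else-branch is detected only if data_o is observed.
module alu_frag (input logic clk, sel,
                 input logic [7:0] a_i, b_i,
                 output logic [7:0] data_o);

  always_ff @(posedge clk)
    if (sel)
      data_o <= a_i + b_i;   // as specified
    else
      data_o <= a_i - b_i;   // BUG: specification calls for a_i + ~b_i

  // Observation point: a checker that would actually catch the fault.
  a_else_path: assert property (@(posedge clk)
    !sel |=> data_o == ($past(a_i) + ~$past(b_i)));
endmodule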

Standard coverage metrics don’t measure observation coverage. Control coverage is necessary, but not sufficient, to ensure observation coverage, but thorough observation coverage analysis is necessary to ensure that there are enough of the right checkers in the right places. (Mutation analysis can mitigate the observation coverage problem, but it is not a complete solution because it requires pre-defined fault models and still relies on simulation.)

Consequently, even if verification has achieved 100-percent control coverage — for example, each line of RTL code has been exercised at least once — it hasn’t necessarily achieved the desired observation coverage, or even any at all. The inability of many MDV approaches to measure observation coverage is thus a serious source of uncertainty.

This is why dynamic verification famously suffers from uncertainty about precisely how much of the RTL description has actually been verified and about precisely which parts of the RTL description have yet to be verified. Nonetheless, engineers rely on a mix of these control coverage statistics, together with the variation in bug detection rates over time, to make a “verification done” decision. However, in the absence of any reliable measure of observation coverage, this decision is more judgment call than analytical determination.

Formal’s historical limitations

Historically, formal methods have played a lesser role in answering the usual questions. Why? After all, the exhaustive nature of formal property analysis should provide good answers to most of them. In particular, formal methods have the potential to verify and measure the degree of both control and observation coverage.

One problem was that ease-of-use issues and limited tool capacity made it awkward to apply formal methods to large blocks, and even more difficult than using testbenches to calculate functional coverage. Consequently, formal was limited to the verification of hard-to-reach code, corner cases, and specific domains such as communications protocols. This case-specific use of formally checked assertions can eliminate a great deal of iterative testbench enhancement and refinement. Instead of using directed tests or automatically generated random tests, and increasing the number of cover properties, we simply write an assertion that expresses the intended design behavior or specifies undesirable behavior. The behavior being verified is the same whether the assertion is simulated or formally proven. Nonetheless, formal’s use as a situational add-on to simulation mitigates IP integration uncertainty only incrementally.
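As a minimal sketch of that case-specific use, the two assertions below state an arbiter’s intended and forbidden behavior directly, replacing a family of directed tests; the signal names (req, gnt) and the four-cycle bound are assumptions for illustration:

// Hypothetical property module, typically attached to the DUV with bind.
module arbiter_props (input logic clk, rst_n, req, gnt);

  // Intended behavior: every request is granted within four cycles.
  a_grant_latency: assert property (@(posedge clk) disable iff (!rst_n)
    req |-> ##[1:4] gnt);

  // Undesirable behavior ruled out: no grant without a pending request.
  a_no_spurious_gnt: assert property (@(posedge clk) disable iff (!rst_n)
    gnt |-> req);
endmodule

The same assertions run unchanged in simulation or under a formal engine, which is precisely the point.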

In recent years, though, the capacity of formal technology has been significantly increased, and its ease of use has been improved by, for example, the establishment of standard assertion formats such as SystemVerilog Assertions (SVA).

But to apply formal methods to the verification of entire IP blocks and to answer the usual IP verification questions, formal technology must do much more than simply augment simulation. It must become a comprehensive methodology, capable of whole-block verification, both stand-alone and, where desired, in cooperation with dynamic verification. And, it must transform the “verification done” decision into an analytical decision by verifying and measuring coverage — both control and observation — accurately, consistently, and reliably.

Using formal to verify IP blocks

So, what happens when you make extensive use of assertions in the verification of entire IP blocks? How do you measure verification quality and progress? The industry organization Accellera has defined several coverage metrics for this purpose. However, historically, formal technology hasn’t been able to quantify them analytically — a deficiency that increases uncertainty.

In an effort to fill the formal metrics gap, the industry has introduced assertion density, which is the number of assertions per line of RTL code {2}. In the absence of any other metrics, measuring assertion density is an “any port in a storm” approach. It may well create a sense of security, especially when the assertion count is significantly increased by automatic assertion generation. But it conveys nothing about the quality and efficacy of the assertions, or about assertion redundancy, or about the progress of the verification. Worse, it tells us nothing about the location of verification holes remaining in the RTL — the very information required to thoroughly verify the RTL. Of course, it’s possible to calculate some kind of coverage number by simulating tests derived from the formal assertions. But this is a tedious, time-consuming task when there are more than a few assertions. In any case, it has very limited value.

To eliminate IP verification uncertainty using formal verification, we need a structured approach that establishes a clear correspondence between the assertion set, the functions defined in the specification, and the verification plan. One such structured approach involves the use of assertions that express functionality at the specification level — the level of transactions and operations. Operational assertions enable the verification team to informally assess progress and identify functionality that has yet to be verified. This step puts formal verification on a par with testbench approaches.

But modern formal methodology can do more. It can now go beyond the benchmark set by testbenches. Formal can now measure progress analytically and automatically, using the Accellera-specified coverage metrics. It can measure and analyze both control coverage and observation coverage to achieve 100-percent verification. Moreover, it can identify gaps in the scenarios covered using formal gap detection and analysis, enabling the team to write the “missing” assertions. Indeed, we can take gap analysis to its logical conclusion — to identify all such gaps, enabling us to construct a gap-free assertion set that proves that 100 percent of the specified functionality has been captured in RTL. The set is, in effect, a functional reference model of the IP block. So we can achieve 100-percent functional verification, too.

Formal coverage measurement

Modern formal MDV technology automatically measures assertion coverage and identifies verification holes in the RTL code. It then exports this data to an Accellera-compliant Unified Coverage Database (UCDB; see Figure 1), where it can be combined with testbench metrics in a single report. Formal MDV technology generates Accellera-specified metrics such as assertion coverage, assertion result, cover property, statement coverage, and branch coverage — all with no assistance from simulation.


Figure 1: OneSpin’s Quantify MDV measures assertion coverage and builds a UCDB database

The combination of formal’s exhaustive verification and automatic coverage metric generation delivers much greater IP integration certainty than a slew of iterative dynamic test runs. Moreover, it can be deployed as soon as the first RTL sub-blocks become available, and long before a testbench (even a partial one) has been developed.

How does it work? What factors must be considered? Consider the IP block design-under-verification (DUV) in Figure 2:


Figure 2: Factors in the formal verification of an IP block


Before commencing verification, non-functional code must be excluded. Modern formal technology automatically identifies and excludes such code (a small illustrative sketch follows the list), including:

• Functional code excluded by the constraints (environmental conditions).
• Unreachable code: code that cannot be verified by any means, formal or otherwise.
• Redundant code: code that implements no DUV function.
• Verification code: code written to assist verification.

The formal MDV technology then analyzes the remaining “legal” functional code to identify and measure both verified and unverified RTL code — verification holes — helping us target additional assertions precisely at those holes (see Figure 3). We can continue this process until 100 percent of the RTL has been verified from both control and observation standpoints. And it requires no simulation whatsoever.


Figure 3: Formal MDV measures control and observation coverage to identify unverified RTL


Clearly, this application of “quality” assertions — the right assertions in the right places — enhances coverage significantly more reliably and effectively than the assertion density approach.

Moreover, by automatically identifying code excluded by constraints, the technology considerably simplifies the task of detecting over-constraining. And it does so early in the verification, long before the integration stage, where the effects of over-constraining so often surface. Of course, with over-constraining we still have the task of determining what the specific problem is. Are the constraints overly restrictive or just plain wrong? Or is the code redundant, so that it can simply be removed? Clearly, it’s much quicker to arrive at a diagnosis without first having to simulate with cover and witness traces.

Using this formal MDV approach, we can say with confidence how thoroughly the IP block’s RTL description has been verified, the conditions under which it was verified, and how accurately those conditions reflect the operating environment into which the IP block will be integrated.

Has the specification been completely implemented?

We can now analyze how thoroughly the RTL design has been verified, but how do we know that we’ve completely implemented the IP specification to begin with? In other words, how do we know that we’ve achieved 100-percent functional verification?

As previously noted, we can do this by constructing a set of operational assertions that expresses intended behavior — design intent — at the specification level. In other words, they operate at the level of (for example) reads, writes and data transfers, which is also the level of the verification plan. As the set is being built, formal technology analyzes all possible input scenarios with their associated output behaviors to identify discontinuities in the set, enabling us to fill the gaps with enhanced or additional assertions. When the set is gap-free and proven against the RTL, the specification has been implemented 100 percent. Figure 4 illustrates the gap-detection methodology.

Figure 4: Formal analysis identifies assertion set gaps

The task is to determine whether the DUV — an SDRAM controller — can perform an operation that is not yet covered by an operational assertion. In this case, the set of operational assertions consists of the single_read, single_write, idle (nop) and row_activation operations. The technology finds an input trace, starting after the single_read operation is complete, where none of the existing assertions match. It then generates a counterexample that indicates that there is no operational assertion to cover a burst_write request. We can now write an assertion or modify an existing assertion to fill the gap.
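As a hypothetical sketch of how that gap might then be closed in plain SVA (the operational style appears later in this article): assuming burst_single_i high selects a burst transfer and that the controller has a write_ack_o output, the missing check could begin life as:

// Hypothetical "missing" assertion flagged by gap analysis. The request
// signals follow the article's SDRAM example; burst_single_i's polarity,
// the write_ack_o output, and the timing bound are all assumptions.
a_burst_write: assert property (@(posedge clk) disable iff (!rst_n)
  (write_request_i && burst_single_i) |-> ##[1:8] write_ack_o);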

How do we create operational assertions?

The question is now: How do we create operational assertions? Assertion synthesis can’t do it; that approach works only with relatively simple structural assertions at the FSM level in the RTL.

We can develop this set of end-to-end operational assertions — written in SVA — using the specification’s timing diagrams, which define cause-effect relationships, and a SystemVerilog library of “defines” and “functions” that simplifies the writing of operational assertions. There is thus a direct and unambiguous link between operational SVA and the IP’s specified functionality. An example of the use of this approach is shown in Figures 5 and 6.


Figure 5: Using a timing diagram to write operational assertions

The timing diagram describes the execution of a single read operation performed by an SDRAM controller. Cause and effect are represented by blue traces and red traces, respectively.

Cause: When the operation starts, the DUV is in some abstract state row activated; read_request_i is high, while both write_request_i and burst_single_i are low. The upper bits of address_i, which denote the row, do not change with respect to the previous operation; that is, the single_read refers to the row that has already been activated (hence the abstract state row activated). The lower bits of address_i may change, but are labeled “C” for further use. Three cycles after the single_read operation starts, the sdram_read_data_i input is labeled “D” for further use.

Effect: Some output values, for example cs_n_o and ras_n_o, are specified throughout the operation during t##1, …, t##5, which is captured in a function (sdram_read()). Other outputs, such as read_data_o, have a specified value only at certain time-points (read_data_o is of interest only when ready_o==1’b1). When no value is specified for a given signal in the timing diagram, that signal can take arbitrary values. This is thus an abstraction of the initial timing diagram, which represents a single run with every signal having a specified value at any given time-point. Note that for the single read operation, the cause and the effect overlap, which is difficult to express in structural SVA. However, it is expressed easily with operational SVA by, for example, using t##0, t##1 etc., to refer to different time-points (see Figure 6).


Figure 6: Operational SVA expresses complex behavior both intuitively and succinctly.


Here you can see how the timing diagram in Figure 5 is translated into an operational assertion. It consists of two parts: the “cause” part that states the cause of the operation, and the “effect” part that states the expected DUV response.

The “t” construct models an abstract and flexible time point which can be either unconditional or triggered by signals. In addition to t##0 etc., there are other means to refer to time-points, such as during. Also, functions may be used to hide signal-level mappings from the operational assertions to ease readability and to enable easy re-use in other operational assertions. 
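Since the figure itself is not reproduced here, the sketch below approximates the single_read assertion in plain SVA, following the description above: local variables stand in for the “C” and “D” labels, and standard ## delays stand in for the library’s t##N time-points. The command-pin checks via sdram_read() are elided, and the exact cycle offsets are assumptions:

// Hypothetical plain-SVA approximation of the operational single_read
// assertion; operational SVA expresses the same behavior more succinctly.
property single_read_p;
  logic [9:0]  col;   // "C": column bits of address_i, captured at t##0
  logic [31:0] data;  // "D": sdram_read_data_i, captured at t##3
  (row_activated && read_request_i && !write_request_i && !burst_single_i,
     col = address_i[9:0])
  ##3 (1'b1, data = sdram_read_data_i)
  // Effect (simplified): one cycle later the captured data is returned,
  // qualified by ready_o; the cs_n_o/ras_n_o checks and the use of col
  // are omitted for brevity.
  |-> ##1 (ready_o && read_data_o == data);
endproperty
a_single_read: assert property (@(posedge clk) disable iff (!rst_n)
  single_read_p);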

Is the IP specification correct and complete?

We can now measure verification coverage and progress; and we can ensure that we’ve completely implemented the specification. But how do we know that the specification is complete and correct to begin with? No technology can analyze the written IP specification for errors and omissions. But errors, omissions and inconsistencies found during construction of the assertion set may indicate possible specification inconsistencies.

Ultimately, we ourselves must find errors, using whatever evidence is available. But it helps a great deal when an analytically constructed assertion set flags the symptoms of such problems. Of course, anomalies in testbench results can also highlight such inconsistencies, but the non-exhaustive nature of dynamic verification — particularly the inability of its coverage metrics to adequately capture observation coverage — makes the process more hit-and-miss than the analytical approach of modern formal methods.

How about configurable IP blocks?

In our opening remarks, we noted the difficulty of verifying configurable IP. Verifying such blocks with simulation/testbenches requires a different testbench for each possible (or, at least, each desired) configuration. However, because formal verification with operational assertions verifies design intent rather than implementation intent, it should be capable of verifying multiple IP configurations in one verification run. Formal technology has begun to address this opportunity for relatively simple configurable structures, such as configurable bus widths, FIFO depths, and register counts; more complex structures require further development. Nonetheless, there is real potential for removing integration uncertainty for configurable IP in a highly productive manner.
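A minimal sketch shows why design-intent assertions port across configurations: the properties below are written against a parameter, so the same text serves any legal DEPTH elaboration. The FIFO interface here is hypothetical:

// Hypothetical property module for a configurable FIFO; bound to each
// elaboration with bind, the same assertions cover every configuration.
module fifo_intent_props #(parameter int DEPTH = 16)
  (input logic clk, rst_n, push, pop,
   input logic [$clog2(DEPTH):0] count);

  // Design intent, independent of configuration: occupancy stays in range.
  a_no_overflow:  assert property (@(posedge clk) disable iff (!rst_n)
    count <= DEPTH);

  // Never pop an empty FIFO, whatever the configured depth.
  a_no_underflow: assert property (@(posedge clk) disable iff (!rst_n)
    pop |-> count != 0);
endmodule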

Round-up

We can use modern formal MDV methodology to detect verification holes and determine how thoroughly the RTL has been verified. Using formal coverage analysis and metrics, we can ensure that 100 percent of the RTL has been verified with 100-percent control coverage and 100-percent observation coverage. We can deploy this approach as soon as the first RTL sub-blocks become available, and long before a testbench (even a partial one) has been developed. And, in any case, we can do the entire verification with no assistance from simulation whatsoever.

We use operational SVA, which is the appropriate level at which to express design intent and to verify the environmental conditions into which an IP block will be integrated. It is certainly the appropriate level for achieving convergence for verification sign-off before integration into the SoC. Using formal gap analysis to construct a set of end-to-end operational SVA, we can ensure that we’ve correctly implemented 100 percent of the specification in RTL, thus achieving 100-percent functional coverage. Moreover, in the process of constructing the set, we can uncover anomalies that indicate errors and omissions in the specification itself.

So, do we yet have a comprehensive methodology, capable of whole-block verification, both stand-alone and, where desired, in cooperation with dynamic verification? Does the methodology address all of the usual questions? Can we close the persistent IP verification quality gap? Do we now have the means to eliminate the primary sources of IP integration uncertainty? Yes!

References

1. Leonard Drucker, “Functional coverage metrics — the next frontier,” EE Times, 2002. (http://www.eetimes.com/design/communications-design/4142861/Functional-coverage-metrics--the-next-frontier)

2. Harry D. Foster, Adam C. Krolnik and David J. Lacey, Assertion-Based Design, Springer, 2004. (http://www.springer.com/engineering/circuits+%26+systems/book/978-1-4020-8027-2)

Author Biography

Dr. Raik Brinkmann is co-founder and vice president of software development at OneSpin Solutions. Prior to OneSpin, he played leading roles in verification engineering groups at Infineon Technologies; Siemens Corporate Technology (the Siemens think tank) in both Germany and the U.S.; and in the systems verification group at Siemens Public Communication Networks. He has a Diplom Informatiker (equivalent to a master's degree in computer science) from Clausthal Technical University, Germany and a Dr.-Ing. (equivalent to a Ph.D. degree) from the University of Kaiserslautern, Germany.
