Functional coverage methodology

by Ravindra Bidnur and Romeshkumar Mehta, LSI India, Bangalore, TechOnline India - October 01, 2010

There is an underlying need to understand the basic philosophy that defines functional coverage and how coverage goals are achieved. This understanding is what ultimately leads to the final goal of completing verification with confidence and quality.


The complexity of designs and their verification environments in the VLSI/ASIC domain has moved the industry from a predominantly directed test methodology to more automated techniques (such as constrained random simulation). Such new methodologies reduce the number of tests (set of operational sequences) but could result in more test cycles as each test may be run with many randomly generated scenarios.

This leads to two new verification challenges: defining adequate coverage metrics, and managing simulation time in terms of the number of test cycles. Thus, it is now easier to create tests, but more effort is required to qualify and quantify the verification achieved through these tests.

This new approach is gaining good traction in the industry, and various tools and methodologies are surfacing to support it. However, as noted above, success depends on understanding the basic philosophy behind functional coverage and on a disciplined approach to achieving the coverage goals.

In this white paper, we capture the basics of the functional coverage development flow, focusing mainly on the initial planning phase that steers the development in the right direction. At the end, we share a few tips on coverage implementation and on coverage closure.

Introduction to functional coverage

Functional coverage is a set of metrics defined to measure the effectiveness of the verification test suite (directed or random) and to certify the completeness of the design verification process. Beyond this, it enhances the verification process by helping identify simulation cycles that do not increase coverage and by exposing test holes.

Functional Coverage Activity

As functional coverage involves considerable effort and time, it needs to be planned appropriately. The following paragraphs explain the nature of this activity.

The above figure depicts the spread of the functional coverage development and closure activity. The initial phase tracks the design cycle, while the latter part tracks verification and is predominantly focused on verification closure.

Functional coverage predominantly involves three phases: Planning, Development and Validation, and Closure. Planning takes the design specification document as input and produces a detailed functional coverage plan (details in the next section of this paper). Coverage development takes the design implementation as its prime input; in this phase the plan may be refined based on the actual design implementation. During the validation phase, the developed coverage code is sanity-checked for correctness with the limited tests available. The last stage is coverage closure, which mainly involves running simulations with the design and the available verification tests and gathering the coverage results. This phase centers on analyzing the stimulus required to achieve the defined coverage and improving it; based on the analysis, the test scenarios may have to be fine-tuned to hit the required coverage. This cycle iterates until the required coverage goal is achieved.

In summary, the functional coverage activity spans the design and verification phases and is tightly coupled to each flow.

Functional Coverage Definition (Planning)

Here, we focus on how functional coverage planning is done based on the design specifications. As this is going to be used along with verification tests, one needs to ensure that no redundancy is added in the process, so that the end goal is achieved without any compromise.

Functional coverage can be broadly addressed at two levels: architecture-level coverage, involving the major functional aspects of the specifications, and implementation-level coverage, targeting features that depend on how the RTL is implemented. The planning process requires identifying all these aspects in terms of functional coverage points.

Thus, functional coverage point identification is the process of analyzing the various possible input conditions and capturing them as appropriate items in the functional coverage plan. In this process one needs to focus on identifying the functional properties of the design, its basic logic elements, and the related properties with their possible input variations. To explain the process, let us take a few examples.


The above example shows a typical combinational logic block whose output is derived from the ORing of four input signals and a few bits of a control register; it thus contains three distinct logical elements. Let us identify functional coverage items for this logic.

1. ORing Logic – It is important to consider one input at a time for the OR gate (very similar to the methodology followed for stuck-at faults during ATPG). Covering an ORing condition thus requires the one-hot input combinations on the OR gate along with the all-zeros case. Hence the coverage points for the OR gate will be:

a. cover 1 on each input while the others are 0 – this ensures we do not get a false hit from multiple inputs being 1, which could mask probable bugs.

b. cover 0 on all inputs.

Similarly, for an ANDing condition, one needs to cover the zero-hot combinations (a 0 on each input while the others are 1) and the all-ones condition for complete coverage.
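In SystemVerilog, these OR-gate coverage points can be captured in a covergroup. The sketch below is illustrative only; the module, clock, and signal names (clk, or_in) are assumptions, not taken from the article's design:

```systemverilog
// Illustrative sketch: clk and or_in are assumed names.
module or_cov_sketch(input logic clk, input logic [3:0] or_in);

  covergroup or_gate_cg @(posedge clk);
    coverpoint or_in {
      // one-hot patterns: a 1 on each input while the others are 0,
      // so a bug on one leg cannot be masked by another input being 1
      bins one_hot[] = {4'b0001, 4'b0010, 4'b0100, 4'b1000};
      // the all-zeros case completes the OR-gate coverage
      bins all_zero  = {4'b0000};
    }
  endgroup

  or_gate_cg cg = new();  // instantiate so the group actually samples
endmodule
```

The AND-gate case is the dual: zero-hot bins plus an all-ones bin.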

Coverage defined in this way is structural coverage and thus implementation dependent. Such coverage may sometimes also be achievable through code coverage analysis; depending upon the logic, a decision must be taken on whether to use functional coverage for it.

2. Register Logic – Control registers usually contain multiple bits, and the bits are related. Hence it is very important to cover all possible combinations. Let us assume the following truth table for the given example.

As only the functionally valid states are specified, there are bound to be questions about what to do with the remaining states. If the number of possible combinations is not too large, as in this case where there are only 16, one should cover all values, including the don't-care conditions, as they are valid input combinations. Another important point is that one need not check the outcome of each input combination; that is outside the goal of coverage measurement. However, the same outputs may serve as inputs to the next stage, and so must be considered in all possible combinations for the next-stage logic.
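Assuming the control register is a 4-bit field as in this example, a SystemVerilog covergroup can enumerate all 16 values, don't cares included. The names (clk, ctrl_reg) are placeholders:

```systemverilog
// Illustrative sketch: clk and ctrl_reg are assumed names.
module ctrl_reg_cov_sketch(input logic clk, input logic [3:0] ctrl_reg);

  covergroup ctrl_reg_cg @(posedge clk);
    coverpoint ctrl_reg {
      // only 16 combinations, so cover every value, including the
      // truth table's don't-care rows: they are still valid inputs
      bins all_values[] = {[4'h0 : 4'hF]};
    }
  endgroup

  ctrl_reg_cg cg = new();
endmodule
```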

3. Many a time, it is also important to treat some functionality as temporal. One example is a data fetch from a non-cacheable area. In such cases, one must cover that data from the non-cacheable area is fetched and then the same data is requested again; the second event ensures the test completely exercises the required functionality.
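A temporal scenario like this maps naturally onto a cover property with a sequence local variable. The sketch below is hedged: all signal names (clk, fetch_valid, is_noncacheable, fetch_addr) are assumptions for illustration:

```systemverilog
// Illustrative sketch; all signal names are assumed.
module refetch_cov_sketch(input logic clk, fetch_valid, is_noncacheable,
                          input logic [31:0] fetch_addr);

  // first event: a fetch from a non-cacheable area (address captured);
  // second event: the same address requested again, any cycles later
  property p_noncacheable_refetch;
    logic [31:0] a;
    @(posedge clk)
      (fetch_valid && is_noncacheable, a = fetch_addr)
        ##[1:$] (fetch_valid && fetch_addr == a);
  endproperty

  cover property (p_noncacheable_refetch);
endmodule
```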

4. For the register example above, each of the register bit combinations needs to be covered under both enable and disable conditions.
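Covering register combinations under both enable and disable conditions is a natural fit for cross coverage. A minimal sketch, with assumed names (clk, enable, ctrl_reg):

```systemverilog
// Illustrative sketch: clk, enable, and ctrl_reg are assumed names.
module ctrl_en_cov_sketch(input logic clk, enable,
                          input logic [3:0] ctrl_reg);

  covergroup ctrl_en_cg @(posedge clk);
    cp_ctrl   : coverpoint ctrl_reg;        // all 16 register values
    cp_en     : coverpoint enable;          // enable and disable
    ctrl_x_en : cross cp_ctrl, cp_en;       // every combination under both
  endgroup

  ctrl_en_cg cg = new();
endmodule
```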

The above examples give a basic idea of how to create a list of coverage items based on the design specification and implementation. Thus, we recommend that the following guidelines be considered while planning for functional coverage:

  1. Identify areas of interest that can be covered by other, more automated means such as code and toggle coverage, and do not make them part of the functional coverage implementation. This can save a lot of effort when developing functional coverage.
  2. Focus on inputs/stimuli only and do not consider the results of a stimulus (though the results of a stimulus may well become input/stimuli to another functional logic and then be considered for it).
  3. Consider all possible/valid (including don't-care) combinations of inputs. If the number of combinations in a certain case is large, one needs to take a judgment call. Also, make sure that invalid combinations are made to raise "alarms" using constructs like illegal_bins.
  4. Stress should be on events which are related but do not necessarily happen in the same cycle. These cannot be covered by code coverage.
  5. Temporal events must be considered.
  6. Coverage should include all gating conditions (as in the ANDing case).
  7. For detailed implementation features, if functional coverage is to be used, apply the ATPG-like structural coverage approach shown in the example above.
  8. In certain cases, functionality is implementation dependent and hence should be covered accordingly, a common example being FIFO threshold crossing. Plan to cover such items at a later stage of the project, when the RTL is more stable.
  9. Reset conditions should normally be part of checkers, not functional coverage items. However, make sure that at the highest level, coverage for reset events is included, with scenarios like a sudden reset during normal operation.
  10. It is very important to document the coverage features in a suitable form, which helps in many ways during the development flow. The coverage plan provides a good cross-reference between design features and verification tests. Appendix A shows a template developed using XLS.
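The illegal_bins suggestion above can be sketched as follows; the mode signal and its valid encodings are purely illustrative assumptions:

```systemverilog
// Illustrative sketch: mode and its valid encodings are assumed.
module mode_cov_sketch(input logic clk, input logic [2:0] mode);

  covergroup mode_cg @(posedge clk);
    coverpoint mode {
      bins valid[] = {[3'b000 : 3'b101]};   // assumed valid encodings
      // invalid encodings raise a simulation error ("alarm")
      // instead of being silently ignored
      illegal_bins inval = {3'b110, 3'b111};
    }
  endgroup

  mode_cg cg = new();
endmodule
```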

Functional Coverage Development (Implementation and Validation)


Once the coverage items are planned, the coverage code can be developed independently of the RTL (in the initial stages) using locally defined signals in the coverage code. Once the RTL is developed, the signals used locally for coverage can be mapped to the appropriate RTL signals. This methodology can save a lot of time in the coverage implementation process. We recommend the following guidelines for functional coverage development:

  1. Try to implement coverage code independent of the RTL.
  2. For coverage implementation, use locally defined signals which can later be mapped to the RTL.
  3. Use cover properties and cover groups/points as appropriate (in general, properties for control logic coverage and cover groups/points for data logic coverage).
  4. FSMs may be covered using both cover groups (for state coverage) and properties (for transitions).
  5. Use a gating event for cover groups apart from the clock. This will reduce the simulation overhead.
Another possible and efficient way to develop functional coverage ahead of the RTL is to use verification elements such as scoreboards and transactors. Verification environment development goes on in parallel with RTL development; if functional coverage is developed simultaneously, leveraging these verification components, it can save much time.
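A minimal SystemVerilog sketch combining the guidelines above: covergroup-based state coverage with a gated sampling event, plus a cover property for one transition. All names (clk, sample_en, state, IDLE, BUSY) are assumptions:

```systemverilog
// Illustrative sketch: all names and state encodings are assumed.
module fsm_cov_sketch(input logic clk, sample_en,
                      input logic [1:0] state);
  localparam logic [1:0] IDLE = 2'd0, BUSY = 2'd1;

  // sampling is gated by sample_en as well as the clock, which keeps
  // covergroup overhead out of uninteresting simulation cycles
  covergroup fsm_cg @(posedge clk iff sample_en);
    cp_state : coverpoint state;   // state coverage via covergroup
  endgroup

  fsm_cg cg = new();

  // transition coverage via a cover property
  cover property (@(posedge clk) state == IDLE ##1 state == BUSY);
endmodule
```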


Another important aspect of functional coverage implementation is validation of the coverage code itself. As coverage is only stimulus dependent, this can be done even before verification is complete. Although validation of the entire code is imperative, we propose that during the implementation period only a few aspects of a particular coverage code be validated, as the entire stimulus may not yet be available; the remaining validation can be taken up in the next phase, functional coverage closure. Validation mainly involves running tests to make sure that certain properties/coverpoints of the feature being validated are correctly evaluated. This is usually a manual process done using simulation. It helps the user identify incorrectly written (usually mistimed) properties, incorrect signal mapping to the RTL, incorrect use of illegal and ignore bins, and the like. The only way to achieve high-quality coverage code is to validate it with reviews and eyeball verification using interactive simulations. Validation done in this manner reduces the later effort required to analyze drops in coverage caused by un-validated features.

Functional Coverage Closure

This activity is a three step process which usually runs in a loop till targeted coverage is achieved. Each of these steps is described below.

  1. Coverage data generation: This first step involves simulating the design with the developed test suites and generating coverage data. It is imperative to define an effective strategy for this step, as the analysis depends on what data is generated. There are two important points to consider here:

     a. Coverage variation – During this period designs are usually not completely frozen, and changes are inevitable in the design as well as in the verification environment and coverage code. Test randomization, too, can cause coverage to vary from run to run, which can lead to confusion and extra effort in resolving issues. A possible solution is to merge coverage data over a small number of runs, giving effective but not stale data for analysis. Only passing tests should be considered, and a large number of seeds effectively minimizes variation between coverage runs.

     b. Computing resources – Performance and availability of computing resources can be a concern for large designs such as a processor IP. In such cases the coverage data generated can be of huge volume and needs to be generated using the right types of resources. This usually needs planning, to avoid schedule interruptions and conflicts.

  2. Coverage data analysis: This step involves manually identifying areas of low coverage and deriving their root causes. The following points help make this process effective:

     a. Have a pre-defined goal for the coverage numbers of each coverage item. Though complete coverage is ideal, it is unrealistic for large and complex designs, so a very high but feasible coverage goal must be set to streamline the focus of verification.

     b. If your coverage plan is defined hierarchically by logic section, it makes sense to assign a weightage to each section. Consider, for example, address decoder logic and ALU coverage defined at the same level in an overall processor coverage plan: if we give the same weight to both, the resultant coverage at the processor level can be skewed, because the complexity of the decoder is not comparable to that of the ALU.

     c. Agree with the design and verification teams on waivers of scenarios/coverage holes. This can become a contentious issue if the verification and coverage teams are not the same, and the resulting conflicts are best avoided, as they can hamper verification progress. Setting clear expectations during the coverage planning phase avoids this.

     d. Track progress and decisions made. Here the coverage closure template plays a major role. Present tool versions offer little help in tracking coverage numbers across releases; some effort put into exporting the data and tracking it in an XLS spreadsheet will help the project. Sample templates used in our projects are given in Appendix A and Appendix B; they helped us track the coverage activity in all the phases mentioned above.

     e. Some of these issues can be avoided by thorough review of the coverage plan at the planning stage.

  3. Test suite/coverage code refinement: This is the step where corrective action is taken based on root-cause analysis of the coverage gaps. Action may be required on the test suite, the coverage code, or both. It may also be necessary to change the weights of certain features to remove bias from the coverage data, or to refine the coverage generation process itself, for example by removing certain features or parts of them. Every test code/environment improvement at this stage must be followed by coverage code validation. Similarly, RTL bugs found later in the verification stage must also be checked for escapes against the coverage code, to improve the functional coverage itself.
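The weightage assignment described above maps onto the covergroup option.weight field in SystemVerilog. A hedged sketch; the covergroups, signals, and chosen weights are illustrative, not from the article:

```systemverilog
// Illustrative sketch: names and weight values are assumed.
module weight_sketch(input logic clk,
                     input logic [3:0] dec_sel, alu_op);

  covergroup decoder_cg @(posedge clk);
    option.weight = 1;   // simple address decoder: low weight
    coverpoint dec_sel;
  endgroup

  covergroup alu_cg @(posedge clk);
    option.weight = 4;   // complex ALU: higher weight, so the merged
                         // processor-level number is not skewed
    coverpoint alu_op;
  endgroup

  decoder_cg dcg = new();
  alu_cg     acg = new();
endmodule
```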

The process flow and guidelines described in the sections above help reduce effort by keeping the focus and avoiding rework. They also assure that a high quality of verification is achieved by defining the coverage items and goals correctly.

About the Authors:

Based in Bangalore, Ravindra Bidnur is Senior Engineering Manager - ASIC DvDs, and Romeshkumar Mehta is on the Engineering Staff - ASIC DvDs.

Appendix A

Coverage plan tracking template (spreadsheet-based, using XLS)

Tracking Graph

Appendix B

Coverage closure template using the XLS spreadsheet.

Feature Tracking with Priority for a Release
(This sheet is derived from the detailed status)


Detailed Coverage numbers for a release:
(data is imported into the XLS worksheet)

