Three researchers from a Saudi Arabian university and one from an Indian university have presented a basic strategy that, they say, helps create a more efficient framework for processor verification. Too much is taken for granted during verification, and while many of the steps are common sense, they still need to be documented properly, they said; that observation is the cornerstone of their paper.
In a paper presented at the recent VLSI Design and Test forum in India, Asheesh Shah, Abdulaziz Mazyad and Hamed Elsimary from King Saud University, Saudi Arabia, and Ashwani Ramani from Devi Ahilya Vishwavidhyalaya, Indore, said that although formal verification is growing in importance, its integration with existing methodologies such as simulation and other verification modules is not very clear and remains vendor-specific.
The framework they propose is based on common sense as much as on identifying and covering all aspects of the verification process.
"The sophistication of recent processor architectures requires major logic verification effort, both in terms of time and manpower. This has become a major bottleneck in the overall time-to-market of the final product. Verifying the processor requires thorough test plans, efficient simulation technology and a proper execution plan. Further, verification challenges are created by cache coherency, memory management and other subtle architectural features, which can be vendor-specific. Besides verification of the design, it is also necessary to test the performance of the newly designed chip. Both these tasks require large man-hours and millions in investment," they said.
Some common features that all processors will need are clear: multiple processor cores per chip, superscalar and out-of-order execution, aggressive pre-fetching of instructions and data, speculative execution, multi-level caches, an IEEE-compliant floating-point execution unit, multithreading on each chip and dynamic power management.
The strategy the researchers propose is general, a work in progress, and has its limitations. Simulation serves as the main engine of virtually all verification flows, while formal and semi-formal methods complement the simulation-based process; however, these methods are constrained by design complexity and so cannot be applied across the entire core design.
With rising processor complexity, verification will face the challenges of a growing architectural state space, tool limitations, manpower training, coverage and analysis, reduced controllability, and project-management issues.
A common strategy that can be applied across the board is difficult, but a proper verification strategy (plan), in place before actual processor design starts, is basic to successful verification. Such a strategy acts as a guideline, whether a simple checklist or a more complex document highlighting the flow and tasks. The complexity and work behind any industrial-level verification process is very large, and the reason progress cannot be tracked easily in the vast majority of verification processes is that the loop between implementation and the verification plan is closed through a time-consuming, error-prone, manual process, often done ad hoc with data coming from various sources. This demonstrates the need for automated storage and analysis of verification data within the process, and for a robust strategy and framework that can reduce bottlenecks, if not guarantee a complete cure.
"Verification projects can be managed and brought under control provided that the right combination of planning, data collection and measurement techniques is used, which calls for good project-management techniques. Our strategy is nothing but a set of questions, goals and objectives, along with some basic prerequisites that we consider necessary before any verification task starts. This is an essential and integral part of the process that helps in better decision-making, and its importance will be clearly reflected in the outcome of the verification process. Many of these issues are taken for granted and can be termed common sense, but they still need to be documented," they said.
These prerequisites are: a proper choice of tools (external or in-house) and their evaluation, keeping reliability, scalability and time factors in mind; identifying key tasks and highlighting dependencies to reduce the time the process takes; multiple models of each design unit at various abstraction levels; interface drivers and checkers in C/C++, monitors and debug tools; hand-generated test cases based on prior experience; and a team of verification engineers with good interpersonal communication skills.
Set of questions
Questions to be asked before the actual process starts concern: defining the aim of the process at the system, chip, multiple-unit, unit and block levels; the size of the unit/block-level design; distinguishing between control logic and data logic; identifying and demarcating related and dependent control/data units; identifying the bus architecture, large memory blocks and caches; identifying large architectural units fit for partitioning for the verification process; identifying basic, key, new and problematic architectural units, along with the properties to be verified; identifying protocols that need checking for all control logic, buses and other functional blocks; creating a list of functions and attributes for which completeness of testing is of particular concern; fixing the number and size of teams for each verification task, with time frames; and grouping related or dependent teams. Some tasks done before the RTL stage are useful, including compliance testing, seeding the RTL code with assertions, and raising the abstraction level where possible for quick pre-RTL verification.
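A question set like this only helps if its status is trackable rather than ad hoc. As a minimal sketch (the `PlanItem` structure and the sample questions are illustrative, not from the paper), the checklist can be captured as machine-readable plan items:

```python
from dataclasses import dataclass

@dataclass
class PlanItem:
    """One pre-verification question and its resolution status."""
    question: str
    owner: str = "unassigned"
    resolved: bool = False

# Illustrative subset of the question set described above.
plan = [
    PlanItem("Define the aim: system, chip, multi-unit, unit or block level?"),
    PlanItem("Which units are control logic and which are data logic?"),
    PlanItem("Which architectural units are new or historically problematic?"),
]

def open_items(plan):
    """Questions still awaiting an answer before verification starts."""
    return [p.question for p in plan if not p.resolved]

plan[0].resolved = True
print(open_items(plan))  # the two still-open questions
```

Keeping the answers in one structure, rather than scattered across e-mails and documents, is exactly the kind of automated bookkeeping the researchers argue for.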
The entire verification process needs to be divided into tasks such as creating a proper software environment, functional verification, transistor- and latch-level equivalence, timing, test generation, system simulation, coverage analysis, performance validation and bug analysis. Some of these tasks depend on others and cannot be started until those are done.
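The dependency constraint among such tasks can be made explicit in tooling. As a minimal sketch (the task graph below is illustrative, not taken from the paper), a topological sort yields a legal execution order in which every task's prerequisites come first:

```python
from graphlib import TopologicalSorter

# Illustrative task graph: each task maps to the set of tasks it depends on.
tasks = {
    "software_environment": set(),
    "functional_verification": {"software_environment"},
    "test_generation": {"software_environment"},
    "system_simulation": {"functional_verification", "test_generation"},
    "coverage_analysis": {"system_simulation"},
    "performance_validation": {"system_simulation"},
    "bug_analysis": {"coverage_analysis"},
}

# static_order() emits each task only after all of its dependencies.
order = list(TopologicalSorter(tasks).static_order())
print(order)
```

The same sorter also detects cycles (a `CycleError`), which in planning terms flags a pair of tasks that have been scheduled to wait on each other.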
The strategy is incomplete without considering test-generation methodologies, data collection for coverage and analysis, and management issues such as event logging and debug. Picking the right tools is one of the main tasks of the whole verification procedure.
The challenges are many, and one is to integrate formal verification tools with simulation and other verification modules. This is best addressed by criteria such as the time needed to take an environment from set-up to final results, the risk-reward ratio, size, coverage and so on. Coverage data collection and analysis is another matter of concern.
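At its core, coverage data collection amounts to counting hits per coverage bin and flagging the holes. A toy sketch (the bin names are invented for illustration and do not come from the paper):

```python
from collections import Counter

class CoverageModel:
    """Toy functional-coverage model: named bins with hit counts."""

    def __init__(self, bins):
        self.hits = Counter({b: 0 for b in bins})

    def sample(self, bin_name):
        """Record one hit for a bin observed during simulation."""
        if bin_name in self.hits:
            self.hits[bin_name] += 1

    def holes(self):
        """Bins never exercised by any test so far."""
        return [b for b, n in self.hits.items() if n == 0]

    def percent(self):
        """Fraction of bins hit at least once, as a percentage."""
        covered = sum(1 for n in self.hits.values() if n > 0)
        return 100.0 * covered / len(self.hits)

cov = CoverageModel(["cache_hit", "cache_miss", "tlb_miss"])
cov.sample("cache_hit")
cov.sample("cache_hit")
cov.sample("cache_miss")
print(cov.percent(), cov.holes())
```

Industrial coverage tools add cross-coverage, weighting and database storage on top, but the hole report — the list of behaviors no test has reached — is the piece that drives the analysis the researchers describe.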
"There is also a host of test-generation methodologies, ranging from directed to random to constraint-driven techniques. A proper log of events and reported bugs will be helpful to the overall and future verification strategy and needs to be shared among the various teams. A detailed bug report, review and analysis will be useful in reducing time and manpower effort, besides improving future design work. There are many other vital tools that will be required, but they are beyond the scope of this paper. Some specialized methods reported for functional verification would require relevant tool support. Co-simulation is also widely used to reduce time in many applications. Most tools used are in-house, especially at large commercial companies like IBM, Intel and some others. However, a host of tools is also available from vendors. One can also gain from the work of universities and research groups and use it with due diligence," the researchers pointed out.
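Of the three methodologies named, constraint-driven generation is the least self-explanatory: stimulus is drawn at random but must satisfy declared constraints. A minimal sketch of the idea via rejection sampling (the fields and constraints below are invented for illustration; real tools use SystemVerilog constraint solvers rather than retry loops):

```python
import random

def gen_mem_op(rng, max_tries=1000):
    """Constrained-random memory operation: draw random fields and
    reject any draw that violates the constraints (illustrative only)."""
    for _ in range(max_tries):
        op = {
            "kind": rng.choice(["load", "store"]),
            "addr": rng.randrange(0, 2**16),
            "size": rng.choice([1, 2, 4, 8]),
        }
        # Constraint 1: accesses must be naturally aligned.
        if op["addr"] % op["size"] != 0:
            continue
        # Constraint 2 (arbitrary example): stores stay below 32 KiB.
        if op["kind"] == "store" and op["addr"] >= 0x8000:
            continue
        return op
    raise RuntimeError("constraints not satisfied within retry budget")

rng = random.Random(42)   # seeded for reproducible regressions
ops = [gen_mem_op(rng) for _ in range(5)]
```

Seeding the generator, as here, is what lets a failing random test be replayed exactly — one of the practical reasons event and bug logs need to record the stimulus configuration.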
One of the main challenges before a processor verification team is to come up with fast, pre-RTL functional verification. Attempts are being made to raise the level of abstraction at the system level, with further interest in ESL.
Another factor that can considerably reduce verification time is attention to the compliance problem. The notion of a framework for a successful verification plan and strategy stems from these pre-RTL attempts, which can yield considerable time savings. However, there is ample room to expand the scope of the framework to cover both the pre-RTL and the post-RTL verification process. As the abstraction level rises further, there will be a greater need for compliance checking: a majority of verification bugs arise from wrong specifications, communication problems and ambiguity, they said.
"Coverage data and analysis in the post-RTL scenario is a continuing area of development, and a slew of tools and techniques, both in-house and external, is available to the large industry houses engaged in processor verification. Therefore, to reduce time-to-market, the focus has to be on integrating these tools and on earlier reporting of failures. Our work on a framework is directed toward such considerations, which can help reduce the overall verification time-frame."
The paper was based on a project to build a good verification plan covering and integrating the tools, techniques and methodologies. It puts in perspective a strategy that can be built upon further, and it rests on the belief that such a strategy and framework will benefit everyone engaged in verification.
The next gain in verification efficiency will come partly from raising the level of abstraction and partly from a greater focus on compliance and still better coverage, with the latter offering ample scope for research communities to engage in, they concluded.