How to achieve quality assurance for your electronic designs

by Clive Maxfield, TechOnline India - April 06, 2011

How do we ensure the quality of all aspects of an electronic design, including hardware (digital, analog, mixed-signal...) and software (boot code, test routines, firmware, drivers...) for anything from FPGAs and SoCs to full-blown embedded systems?

It’s no secret that electronic designs are becoming ever more complex. I used to think things were hard enough back in 1980 when I was designing my first ASIC as a gate-level schematic using pencil and paper. Looking back, however, I realize life was a doddle and I had things easy: all I had to worry about was making sure that the logic was functionally correct, that it would fit in the device (a gate array containing 2,000 equivalent gates), and that the timing was OK, which wasn’t particularly taxing since our system clock was sub-1MHz and we had lots of slack to play with.

We didn’t even think about things like leakage power and dynamic power consumption. Now, of course, we’re talking about designs containing millions upon millions of logic gates, including humungous blocks of third-party IP and more processor cores and hardware accelerators than you can shake a stick at, with millions of lines of software thrown into the mix.

So how do we ensure the quality of all aspects of an electronic design, including hardware (digital, analog, mixed-signal...) and software (boot code, test routines, firmware, drivers...) for anything from FPGAs and SoCs to full-blown embedded systems?

Obviously we have a plethora of verification and analysis tools available to us, ranging from formal verification to software simulation and emulation, but are these enough? I’m thinking back to the infamous Pentium FDIV bug, which came to light in 1994 and ended up costing Intel somewhere in the region of half a billion dollars. My recollection is that the cause of this bug was that someone neglected to load some entries from a text file into a look-up table (I’m glad I wasn’t that engineer).

The problem is that you can’t test every conceivable occurrence using conventional verification technology. The Pentium FDIV bug was subtle in that only a tiny proportion of the floating-point division operations performed by these processors returned incorrect results, so the bug wasn’t picked up by whatever verification techniques the folks at Intel were using at the time.
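As an aside, the classic sanity check that circulated at the time is easy to reproduce. In Python (my choice of language purely for illustration), it looks like this:

    # The infamous FDIV sanity check: on a correctly functioning FPU this
    # expression evaluates to exactly 0.0; on one of the flawed Pentiums,
    # the division came back very slightly wrong and the result was 256.
    x, y = 4195835.0, 3145727.0
    print(x - (x / y) * y)  # prints 0.0 on any correctly working machine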

How are things like this tracked? Well, even today, there’s an inordinate use of checklists and spreadsheets and tools and techniques of this ilk. But engineers typically begrudge the time required to keep these things up to date, and managers spend a lot of time compiling status reports and suchlike. Surely there has to be a better way…

A traditional design environment/flow

Let’s start by considering a traditional design environment and flow. First of all, we are going to have a bunch of specification documents, which may be presented in a variety of formats, including Word files, text files, Excel spreadsheets, PDFs, PowerPoint presentations, and so forth, as illustrated in Figure 1.

Figure 1. High-level view of traditional design environment/flow.

In addition to the specifications themselves (maximum possible power consumption, maximum possible area…), these documents may also include things like best practices, things to watch out for (“Don’t use IP block XYZ version x.x with synthesis tool ABC version y.y because we ran into problems with this before”), and things to remember to do (“Note to self: Don’t forget to load all of the FDIV look-up table into the processor before building 10 million chips,” for example).

Based on these specifications, we will decide on an architecture, purchase some third-party IP, re-use some internal IP from previous projects, and capture some new functional blocks in the form of RTL. (Obviously this is a gross simplification because we are ignoring things like virtual prototypes and transaction-level models and so forth, but this high-level view will serve our purposes here.)

Next, we will process and analyze our design using a variety of tools, including synthesis, simulation, formal verification, and so forth. In the case of large designs, these tools may generate megabytes, gigabytes, sometimes terabytes of data in the form of log files and reports each day, so the problem will be to wade through all of this data to locate the nuggets of critical information we need to determine if everything is as it should be.

So, once again, we come back to our original question: how can we ensure the quality of our design? How can we capture all of the things we know we need to do in some form or other, and how can we then make sure that we actually do them?

Probes and Formulas and Dashboards, Oh my!

And so we come to VIP Lane, which was created by the folks at Satin Technologies. VIP Lane is a really cool technology that can help ensure the quality of just about any facet of an electronic design. In fact, the reason the founders of Satin gave the company its name was to emphasize how using their technology makes our lives so silky smooooth.

Seriously, this technology is applicable to hardware (digital, analog, mixed-signal...) and software (boot code, test routines, firmware, drivers...) for anything from FPGAs and SoCs to full-blown embedded systems. One really important point is that deploying VIP Lane does not change your existing design and verification flow in any way.

Now, the tricky part for me will be to explain this in such a way that you realize just how clever yet easy to use this all is. Since I’m a hardware guy by trade, we’ll come at things from that direction, but make sure you remember that this technology is applicable to anything you can measure that generates machine-readable results.

Figure 2. VIP Lane – Probes and Formulas and Dashboards.

The first step is to create your probes (also known as sensors). These are little functions that are designed to go into documents or databases and locate and extract the required information in the form of a text string, or a numerical value, or a table, or whatever. These probes may be used to access data from the original specifications, or from the hardware / software design files, or from the log files and databases generated by the design tools, or… the list goes on. The folks at Satin Technologies have already created hundreds of these probes targeted at the design tools and flows from the major EDA vendors, and you can use these probes as templates or starting points for creating your own.
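To give you a feel for what one of these probes might boil down to, here’s a minimal sketch in Python; the log file name, the line format, and the function itself are my own invented examples for illustration, not Satin’s actual API:

    import re

    def probe_leakage_power(log_path):
        """Hypothetical probe: extract a leakage power value from a synthesis log.

        Assumes the log contains a line of the form
        'Total leakage power: 12.34 mW' (an invented format for illustration).
        """
        pattern = re.compile(r"Total leakage power:\s*([\d.]+)\s*mW")
        with open(log_path) as log:
            for line in log:
                match = pattern.search(line)
                if match:
                    return float(match.group(1))  # value in mW
        return None  # nothing found; this would map to a "Not yet tested" state

    # Example usage (the path is hypothetical):
    # p_leak = probe_leakage_power("synthesis_run.log")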

As a simple example, let’s assume that we are creating an SoC design. Somewhere in the specification will be the maximum permitted power consumption for the chip. Let’s call this PMAX. We could create a probe that goes into the appropriate specification and accesses this value. Similarly, we could create probes that access the leakage power (PLEAK) for the chip from the synthesis log files and the dynamic power consumption (PDYNAMIC) from the simulation log files.

Next, we could create a formula that accesses these probe values and uses them to generate the total power consumed by the chip: for example, PTOTAL = PLEAK + PDYNAMIC. And, of course, we can compare our PTOTAL value to the original PMAX as defined in the specification, and use this to inform the designers and managers whether or not we have achieved our goal.
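In rough terms (again using Python purely for illustration; these function names are mine, not VIP Lane’s), the formula stage is nothing more than arithmetic over the probe values followed by a comparison against the specification:

    def total_power(p_leak, p_dynamic):
        """Hypothetical formula: PTOTAL = PLEAK + PDYNAMIC (all values in mW)."""
        return p_leak + p_dynamic

    def power_check(p_max, p_leak, p_dynamic):
        """Compare PTOTAL against the PMAX value probed from the specification."""
        return "PASS" if total_power(p_leak, p_dynamic) <= p_max else "FAIL"

    # For example, with PMAX = 500 mW from the spec, PLEAK = 120 mW from the
    # synthesis logs, and PDYNAMIC = 350 mW from the simulation logs:
    # power_check(500.0, 120.0, 350.0)  ->  "PASS" (PTOTAL = 470 mW <= 500 mW)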

And how do we present this data to the designers and managers? Well, one way is by means of configurable dashboards, as illustrated in Figure 3.

Figure 3. VIP Lane presents results in the form of configurable dashboards.

Each entry in the dashboard corresponds to a particular quality check. A green background indicates a pass; a red background indicates a fail; and other colors indicate different states, such as “Not yet tested.” Moving your mouse cursor over a particular element in the dashboard allows you to access and view the formulas and probe values associated with that element, along with comments explaining what is required and so forth.
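Conceptually, each dashboard element is just a check result mapped onto a color. A trivial sketch of that mapping (my own illustration of the states described above, not VIP Lane’s internals) might look like this:

    # Hypothetical mapping from quality-check states to dashboard colors;
    # the state names and the fallback color are invented for illustration.
    STATUS_COLORS = {
        "PASS": "green",
        "FAIL": "red",
        "NOT_YET_TESTED": "grey",
    }

    def dashboard_color(status):
        """Return the background color for a dashboard element."""
        return STATUS_COLORS.get(status, "grey")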

One point of interest is that different dashboards can be created for different users. The designer of an RTL block may wish to see a dashboard that corresponds only to that block, for example. A team leader may wish to see a dashboard corresponding to a group of blocks. And the project manager may wish to see a dashboard that summarizes the data from all of the other dashboards.

As I mentioned earlier, the concepts underlying VIP Lane are deceptively simple – it’s the way in which the folks at Satin Technologies have implemented the solution that makes VIP Lane so powerful. This technology enables engineering teams to move away from using manual, tedious, and unreliable design checklists; to save weeks with every design by automatically generating dashboards and quality reports; and to achieve on-the-fly quality monitoring with no overhead to the design teams.

So, as we’ve seen, the underlying concept behind VIP Lane is so simple that even a manager can understand it (grin). Of course, the actual implementation is non-trivial, but everything is easy to use and intuitive from the user’s point of view. The end result is to ensure the quality of your design, and who can put a price on that?


 
