The yin and yang of chip design

February 20, 2012

In thinking about this viewpoint, it occurred to me that a good place to start is with the EDA industry itself: what characterizes it? It strikes me that EDA is quite different as an industry compared to, say, the medical industry, especially in terms of speed of innovation. If a person in academia dreams up an EDA idea, it can be implemented, tested to ensure that it works, and put on the market reasonably quickly.

However, the proof needed to demonstrate that an EDA innovation works reliably is somewhat less rigorous than the testing and approval process that medicine and medical technology must go through, and with good reason. This is chiefly because the medical industry deals with people, which makes the consequences of something going wrong far more serious.

In this way, chip design as an industry is more “fault-tolerant.” The rigor in medicine is, by definition, an enemy of progress: if you don’t take risks, you are not moving forward as fast. In the EDA industry, we can accept more risk, and so we can innovate much faster.

But we still need to mitigate risk. A major issue is the capricious nature of the synthesis algorithms that we use in the EDA flow. Because of the sheer scale of modern designs, logic synthesis, placement and routing tools necessarily work with rather inaccurate models of reality. Crosstalk, for instance, can cause the estimated wire delay to deviate significantly from the wire delay in the final mask-level layout. As a result, the outcome can contain various mistakes or inefficiencies.
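To see how large that gap can be, here is a deliberately simplified sketch (a toy model, not any particular tool’s): a lumped Elmore-style RC delay in which the coupling capacitance to a neighboring net is scaled by a Miller factor that stands in for the neighbor’s switching behavior. All numbers are invented for illustration.

```python
# Toy illustration: how coupling (crosstalk) capacitance can swing a wire-delay
# estimate. Values are invented; real extraction and timing models are far
# more detailed than this.

def elmore_delay(r_wire, c_ground, c_coupling, miller_factor):
    """Lumped Elmore-style RC delay, with the coupling capacitance scaled by a
    Miller factor that models the neighboring net's switching direction."""
    c_effective = c_ground + miller_factor * c_coupling
    return r_wire * c_effective

R = 200.0        # wire resistance, ohms (made up)
C_GND = 50e-15   # capacitance to ground, farads (made up)
C_CPL = 40e-15   # coupling capacitance to a neighboring net, farads (made up)

# A synthesis-stage estimate might assume a nominal, quiet neighbor...
nominal = elmore_delay(R, C_GND, C_CPL, miller_factor=1.0)

# ...while sign-off must cover the worst case, e.g. the neighbor switching in
# the opposite direction, which effectively doubles the coupling term.
worst_case = elmore_delay(R, C_GND, C_CPL, miller_factor=2.0)

print(f"nominal estimate   : {nominal * 1e12:.1f} ps")
print(f"worst-case (xtalk) : {worst_case * 1e12:.1f} ps")
print(f"deviation          : {100 * (worst_case / nominal - 1):.0f}%")
```

Even in this toy model, a quiet versus an opposing-switching neighbor moves the delay estimate by tens of percent, which is exactly the kind of gap the downstream sign-off phase has to catch.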

The risks that we take during the synthesis part of a chip design flow are always offset by the safety net of the sign-off phase. Sign-off tools act as independent auditors, mopping up any issues that fall through the cracks. Rigorous analysis allows us to find the errors introduced by the risks we took earlier and, in this way, is a key part of what lets us push closer to the edge.

And we need to be precariously close to the edge in order to squeeze maximum performance out of every square millimeter of silicon. The safety net of our sign-off analysis tools, however, comes at the price of significant compute resources.

The good news is that recent innovations bring some relief. For instance, new parallel Static Timing Analysis (STA) tools promise to significantly accelerate the verification loop.  It’s the silicon implementation that is thought of as the boundary-pushing part of EDA, but finding a solution that actually works in practice relies on robust analysis.
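As a rough illustration of why STA lends itself to parallel execution (a generic sketch, not a description of any particular vendor’s engine): arrival times are propagated through a levelized timing graph, and the nodes within one level do not depend on each other, so they can be evaluated concurrently. The tiny graph and delays below are made up.

```python
# Generic sketch of levelized arrival-time propagation, the structure that
# makes STA amenable to parallel execution. Graph and delays are invented.
from concurrent.futures import ThreadPoolExecutor

# edges: (from_node, to_node, delay_ps) of a trivial hypothetical timing graph
edges = [("in", "u1", 10), ("in", "u2", 12), ("u1", "u3", 8),
         ("u2", "u3", 5), ("u3", "out", 7)]

fanin = {}
for src, dst, d in edges:
    fanin.setdefault(dst, []).append((src, d))

levels = [["in"], ["u1", "u2"], ["u3"], ["out"]]  # topological levels
arrival = {"in": 0}

def propagate(node):
    # Latest arrival over all fan-in edges (max, as in setup-style analysis).
    return node, max(arrival[src] + d for src, d in fanin[node])

with ThreadPoolExecutor() as pool:
    for level in levels[1:]:
        # Nodes within a level have no dependencies on each other,
        # so they can be evaluated in parallel.
        for node, at in pool.map(propagate, level):
            arrival[node] = at

print(arrival)  # {'in': 0, 'u1': 10, 'u2': 12, 'u3': 18, 'out': 25}
```

In a real design the graph has millions of nodes, and those wide, independent levels are where parallel hardware earns its keep.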

The two classes of tools, synthesis and sign-off analysis, are very different entities. I often say that these two worlds of EDA are like the famous self-help book: Synthesis is from Mars, Analysis is from Venus. It takes a kind of yin-and-yang philosophy to make this marriage work.

Moreover, there are some key EDA business aspects to this yin and yang. Synthesis tools are fragile, especially when pushed to their limits. They are not stable by themselves and often need help from designers to get the best results. Analysis tools require far less support and designer input. From a vendor perspective, therefore, the support burden is typically much greater for synthesis tools, which has fundamental implications for profitability.

Another facet of this situation is that EDA vendors are more likely to get requests for additional licenses for analysis tools than for synthesis tools because analysis tools run for a much longer time.

With synthesis tools, runs are short and license utilization is low, so the pressure to sell more licenses is fairly low. As a consequence, the sales thrust of the EDA business is skewed much more toward analysis tools, because these are the tools that keep the computers busy.

It is for this reason that it’s difficult to launch a high-level synthesis startup: the users are there, but the usage simply isn’t. With analysis tools, the opposite is true: design teams always need more, because long analysis runs hold up progress at critical moments.

One of the fundamental dynamics of the EDA ecosystem is that synthesis tools in general don’t make the money, but the industry can’t live without them. Indeed, their swift run-time can make them seem less important than they really are.

It seems to me that the market has an inherent misunderstanding of the role of these tools and how they interact. It’s also fair to say that this misconception about the true “portfolio mix” nature of the EDA flow has translated into a revenue recognition problem. The analysis part of the business will always appear to generate more of the money, but it does so only in the context of a complete EDA flow.

For vendors, the only way out is to differentiate the synthesis capability and to position it in terms of quality and stability. Designers are willing to pay an infinite amount of money for a tool that delivers 20% better performance, straight out of the box, than anyone else’s. The quality of an analysis tool is much easier to measure: if it runs 3X faster, the runs take a third of the time.

Tools should always be considered in the context of a flow, and it always comes down to the old adage, “it’s the flow, stupid.” It’s not so much the quality of the individual components as how they work together, and how changes are handled along the way, that determines success.

I think this constant evolution and ever-changing balance keep our industry very much alive and interesting. Don’t you agree?

- Patrick Groeneveld is the chief technology officer at Magma Design Automation Inc.

Article Courtesy: EE Times
