Achieve your SoC Design Goals – Measure Twice, Cut Once!

by Dr. Anis Uzzaman and Kenneth Chang, Cadence Design Systems , TechOnline India - May 25, 2011

This paper describes how – by means of today’s state-of-the-art chip estimation, evaluation, and implementation tools and technologies – it is possible to essentially ‘Measure Twice’ and ‘Cut Once’ to fully realize your design goals.

Every new SoC design starts with “The Idea” (for the purposes of this paper we will take the term SoC to embrace ASICs and ASSPs). In some cases someone essentially says “We need to create a new device that does this, that, and the other; an incredible design that will be far better than anything else out there; and one that will bring us a lot of money while making our competitors rue the day that they didn’t think of this first!”

Of course, not all new chips are of the “whiz-bang, let’s-change-the-world” variety. Many are customer-driven derivatives of something that already exists, with very tight market and cost windows, but these designs also start life as an idea in someone’s head.

This original idea evolves and gets captured into a specification, which is subsequently converted into a high-level architecture. In turn, the architecture is fleshed out into a collection of pre-existing intellectual property (IP) blocks and new functional blocks (the pre-existing blocks may have been internally generated and/or supplied by third-party vendors). The functionality of these blocks is captured using an appropriate level of representation, such as RTL. Following synthesis, simulation, place-and-route, and so forth, the design ultimately finds its physical realization in the form of silicon.

The process of taking an SoC design from the original specification to the final product may sound trivial to the uninitiated. In reality, SoCs are highly complex devices, and the process of productizing one provides many opportunities for error. An error often translates into higher costs in terms of time and money, hurting the company’s bottom line. For example, a functional error that is not detected until silicon comes back from the foundry can force a re-spin, causing the device to miss its market window. As the common industry saying goes: “There’s no such thing as ‘second place’ in this business!”

Another possibility is for a device to be functionally correct, but for it to occupy too much silicon real estate or burn more power than originally estimated.  This may result in advanced cooling and packaging requirements that price the device out of the market.  In all of these scenarios, there are costs associated with time, materials, and market share.

Fortunately, steps can be taken to mitigate risks throughout the SoC development flow. The well-known carpenter’s adage “Measure twice, cut once” reminds us that it takes far less time to double-check a measurement than to recover from a bad cut. This is particularly relevant with regard to SoC development.

If the carpenter cuts the wood improperly the piece is ruined; if the system architect selects the wrong IP the chip fails to meet its design goals. Thus, it is imperative that any assumptions associated with the SoC’s implementation are solid.  Alternative architectures and implementation scenarios should be evaluated up-front.  Furthermore, throughout the implementation phase, these assumptions along with the design requirements should be tracked relative to the actual implementation.


Traditional SoC design environments

In traditional SoC design environments, early chip estimations are accomplished in a variety of ways. Some teams use spreadsheets; others may leverage pen and paper; and, believe it or not, some teams continue to “wing it” and do nothing at all. In reality, this latter group is a dying breed. As chip designs increase in size and complexity, larger gate counts, higher performance needs, and lower power requirements mean that “winging it” is an approach that is doomed to failure.

The bottom line is that the days of using ad-hoc methods to analyze the feasibility of a potential SoC are over. More tangible methods that provide consistent, quantified, and qualified data upon which teams can base their decisions are becoming more attractive, because no one wants to make a costly mistake that requires ‘Cutting’ more than once.

Furthermore, it’s not sufficient to empower businesses to simply make smart decisions; it’s also necessary to be able to make these smart decisions in a timely manner (there’s no point in finally arriving at the optimum architecture after your competitors have already presented their production offerings to the market).

In fact, real-world customer data shows that using spreadsheets or pen-and-paper methods to perform die area and power estimation easily takes a minimum of one to two weeks. And this assumes that prerequisites such as characterized technology data have already been prepared.

In a traditional design environment, such data is typically prepared by hand and is not well maintained for collaboration between multiple teams. One reason for this is that spreadsheets do not provide enforcement capabilities. Without the ability to enforce “administrator-like” privileges, changes to the source data can be made at any time by anybody. As a result, inconsistencies can easily be introduced, thereby increasing the probability of having to make multiple ‘Cuts.’

Consider the case where different system architects create their own spreadsheets. Even if these architects are using exactly the same data in the form of the standard cell, I/O, and memory datasheets, they may interpret the available data in different ways (in practice, even ensuring that they are using the same data can be problematic). In this case, their final results will almost certainly differ, leading to erroneous assumptions and decisions. The bottom line is that there should be a way to automate these tasks so as to provide a more collaborative environment that delivers consistent, quantified, and qualified data on which everyone can base their decisions. Without this, the result may be a horrendous “Measure multiple times, get different results, and – based on these results – cut multiple times” scenario.
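By way of illustration, the following minimal Python sketch shows the idea of a single shared estimation model that every architect queries, rather than each maintaining a private spreadsheet. All names and numbers here are hypothetical and are not drawn from any real library.

from dataclasses import dataclass

@dataclass(frozen=True)          # frozen: the shared record cannot be silently edited
class LibraryData:
    gate_area_um2: float         # average area of one NAND2-equivalent gate (um^2)
    leakage_nw_per_gate: float   # average leakage per gate (nW)

# One administrator-maintained record per process node (illustrative values only).
SHARED_LIBRARY = {
    "65nm": LibraryData(gate_area_um2=1.8, leakage_nw_per_gate=2.5),
    "40nm": LibraryData(gate_area_um2=0.9, leakage_nw_per_gate=4.0),
}

def estimate_logic_area_mm2(gate_count: int, node: str, utilization: float = 0.7) -> float:
    """Every architect gets the same answer for the same inputs."""
    lib = SHARED_LIBRARY[node]                 # single source of truth
    raw_um2 = gate_count * lib.gate_area_um2
    return raw_um2 / utilization / 1e6         # um^2 -> mm^2, padded for routing

print(estimate_logic_area_mm2(5_000_000, "65nm"))   # ~12.9 mm^2

Because the library data lives in one place, a correction made by its owner propagates to every estimate, which is exactly what per-architect spreadsheets fail to guarantee.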

In today’s SoC design environments, the core elements of the design flow are as illustrated in Figure 1. The heart of the flow commences with activities like design capture using RTL coding coupled with synthesis and simulation. Floorplanning activities may also commence at the beginning of the flow and the floorplan may be refined throughout the flow as more accurate data becomes available. And then there are downstream implementation-level activities like place-and-route and optimization. (We will omit the various verification, analysis, and signoff tasks for the sake of simplicity.)

Figure 1. Key elements in a traditional SoC design flow.

Lacking in the majority of design environments is the ability to consistently and deterministically evaluate alternative architectures and different implementation scenarios early in the process before committing ourselves to something that is doomed to failure.

What if the design ends up larger (occupying more silicon area) than expected? What if it ends up consuming more power? What will be the implications in terms of more advanced packaging technologies, heat sinks, cooling fans, and cost overruns? And what if “Measuring once, cutting twice (or thrice)” causes the product to miss its market window?

In fact, there’s an even more embarrassing scenario. What if your design comes out on time exactly as planned … but a competitor releases a device with identical (or better) characteristics in terms of more functionality, lower power consumption, and higher performance, all at a lower price? How could this be possible if your crack design team has done the best that it can do? This latter scenario occurs more often than you might expect. The reason is almost certainly the fact that the competitor performed their upfront due diligence so that they could better plan their get-to-market strategy.

The real question is: “If you could improve your design process in order to become more competitive as a company, would you do so?”  When some users hear this question, they say “We really don’t have the time to consider alternatives because we’re far too busy.”

We know that you are busy. But we also know that our solution will enable you to do things you don’t currently have the time to do. So here’s the challenge – *If* we could increase your productivity, reduce your project risks, reduce your time-to-market, increase the quality of your final product, and generally provide you with the means to “Measure Twice and Cut Once,” would you be interested? If so, read on… 

A next-generation design environment

Decisions made early in the design cycle have the most impact with regard to all aspects of the final SoC, including cost, power, performance, size (of the silicon die), and so forth. It is generally accepted that 80% of a product's cost is determined during the first 20% of that product's development cycle.* (*Gary Smith, garysmitheda.com)

This means that, as early as possible in the development process, it is necessary to select the optimum architecture and silicon process that can realize all of the design’s goals for the least cost. In turn, this involves being able to accurately estimate the area, power consumption, and cost associated with different IP blocks implemented in different technology nodes. For example, does a block of memory IP from Vendor A use less power than an equivalent block from Vendor B while still achieving the required performance? Will implementing the design at the 65 nm technology node from Foundry C achieve the desired power and performance goals, or will it be necessary to move to the 40 nm node? And, if so, would it be more cost-effective to use a 40 nm process from Foundry D?
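As a purely illustrative sketch of this kind of what-if analysis, the Python fragment below filters candidate memory IP against assumed power and performance goals and ranks the survivors by unit cost. Every vendor, node, and figure in it is invented for the example.

# Candidate memory IP options: (vendor, node, area_mm2, active_power_mw, max_freq_mhz, unit_cost_usd).
candidates = [
    ("Vendor A", "65nm", 2.10, 45.0, 400, 0.62),
    ("Vendor B", "65nm", 1.95, 52.0, 450, 0.58),
    ("Vendor A", "40nm", 1.20, 30.0, 500, 0.71),
    ("Vendor B", "40nm", 1.25, 28.0, 520, 0.66),
]

POWER_BUDGET_MW = 40.0   # assumed power goal for this block
MIN_FREQ_MHZ = 450       # assumed performance goal

# Keep only the options that meet the goals, then rank the survivors by unit cost.
feasible = [c for c in candidates if c[3] <= POWER_BUDGET_MW and c[4] >= MIN_FREQ_MHZ]
for vendor, node, area, power, freq, cost in sorted(feasible, key=lambda c: c[5]):
    print(f"{vendor} @ {node}: {area} mm^2, {power} mW, {freq} MHz, ${cost}/unit")

The value of a chip planning system lies in feeding such comparisons with qualified, characterized data rather than hand-copied numbers.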

There are several key components when it comes to accurately estimating and planning chip costs, area, power, and performance characteristics. In addition to the architectural specification itself, it is also necessary to have access to data pertaining to Foundry IP, External IP, and Internal IP. What is required is a Chip Planning System that can take in all these inputs and – based on the architectural specification – produce a chip estimation with detailed reports to help with the decision-making process.

Figure 2. High-level view of a chip planning system.

Using such a system, it should be possible to rapidly carry out “what-if” analysis of alternative architectures and implementation scenarios. The ability to home in on an optimal architecture that employs the most cost-effective IP and silicon process while still achieving area, power, and performance goals is worth its weight in gold.

Equally valuable is the ability to determine early in the process whether creating your new chip is a good idea… or not. It’s much better to find this out up-front, before you’ve committed substantial amounts of time and resources, than to discover downstream in the process – after you’ve coded the RTL and invested in IP that you will never actually get to use – that you are doomed to failure.

The future is closer than you think

In fact, a next-generation SoC planning system is already available today – the Cadence Chip Planning System (CCPS) in conjunction with the Cadence IP Catalog Management System (IPMS) and the ChipEstimate.com IP Ecosystem as illustrated in Figure 3, below. A brief summary of these components is as follows:

The Cadence Chip Planning System (CCPS):  This early chip estimation product line is used at the architectural, pre-RTL phase to provide users with the ability to perform fast and accurate chip estimations. These estimations include die area, power consumption, performance, and cost, which can be used to drive key business decisions. Using this technology allows users to make smarter decisions early, before committing to projects, thereby lowering project risk and speeding time-to-market.

The Cadence IP Catalog Management System (IPMS): This comprehensive system serves as a central web-based IP catalog management system for both internal and external IP. The IP that can be cataloged includes digital, analog, and process technology data, as well as hard IP, soft IP, verification IP, and software. The database also stores IP metadata such as IP descriptions, usage, ownership, etc. The system is highly extensible and can interface with many different database formats, including existing IP repositories based on Dassault, SQL, DB2, CVS, and custom internal systems. A keyword search allows users to locate IP by function, node, process, etc.

The ChipEstimate.com IP Ecosystem: ChipEstimate.com is the largest integrated IP Ecosystem available in the world today. Its central online IP database boasts more than 9,000 components from over 200 suppliers. In addition to the IP functionality itself, ChipEstimate also provides quality data, technical data, and documentation.

Figure 3. Cadence’s real-world, next-generation chip planning system.

Equipped with all of this data, system architects can generate accurate estimates for such things as die-size, power consumption, performance, and cost. Die-size estimation takes full account of all factors that will form part of the physical implementation. For example, in addition to the IP blocks and functional blocks themselves, die-size estimation also considers things like clock trees, test structures, memory halos, etc. Furthermore, a hierarchical, size-accurate, area-based floorplan may be generated, edited, and exported to mainstream floorplanning tools.
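The sketch below gives a rough sense of how such a die-area roll-up might be structured. The overhead factors (clock trees, test structures, memory halos, core utilization) and all of the numbers are assumptions chosen for illustration, not the tool’s actual model.

def estimate_die_area_mm2(block_areas_mm2, memory_areas_mm2,
                          clock_tree_overhead=0.03,  # fraction added to logic area
                          test_overhead=0.02,        # scan/DFT insertion
                          memory_halo=0.10,          # keep-out ring around each RAM
                          core_utilization=0.70,     # routing/congestion padding
                          io_ring_mm2=1.5):          # pad ring and corner cells
    logic = sum(block_areas_mm2) * (1 + clock_tree_overhead + test_overhead)
    memory = sum(a * (1 + memory_halo) for a in memory_areas_mm2)
    core = (logic + memory) / core_utilization
    return core + io_ring_mm2

# Three logic blocks and two RAMs, all areas invented for the example.
print(estimate_die_area_mm2(block_areas_mm2=[4.2, 2.8, 1.1],
                            memory_areas_mm2=[0.9, 0.6]))   # ~16 mm^2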

In the case of profile-based power estimation, both static and dynamic power analysis are taken into account, and it is also possible to model power consumption across the chip’s various operational modes (running versus standby, for example). Sophisticated power estimation algorithms consider factors such as cell library data, process data, parasitic data, clock loading, I/O loading, etc. The resulting power profile and analysis shows active power and static leakage for each IP block, memory block, the chip core, the I/O, and the full chip, and includes comprehensive reports and statistics for each operating mode.
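As a simplified illustration of mode-based power profiling, the following sketch applies the textbook dynamic-power relation (P ≈ α·C·V²·f) per block plus a flat leakage term, using invented block data and two operating modes. A real tool would also derate leakage with voltage, temperature, and power gating.

# Invented block data: (switched capacitance in nF, leakage in mW).
BLOCKS = {
    "cpu":    (2.0, 12.0),
    "dsp":    (1.5,  8.0),
    "memory": (0.8, 20.0),
    "io":     (0.5,  5.0),
}

# Per-mode profile: supply voltage, clock frequency, and per-block activity factor.
MODES = {
    "running": {"vdd": 1.1, "freq_mhz": 400,
                "activity": {"cpu": 0.25, "dsp": 0.20, "memory": 0.15, "io": 0.10}},
    "standby": {"vdd": 0.9, "freq_mhz": 32,
                "activity": {"cpu": 0.01, "dsp": 0.00, "memory": 0.02, "io": 0.01}},
}

for mode, profile in MODES.items():
    f_hz = profile["freq_mhz"] * 1e6
    dynamic_mw = sum(profile["activity"][name] * c_nf * 1e-9 * profile["vdd"] ** 2 * f_hz * 1e3
                     for name, (c_nf, _) in BLOCKS.items())
    static_mw = sum(leak for _, leak in BLOCKS.values())  # flat leakage; real tools derate per mode
    print(f"{mode:8s}: dynamic {dynamic_mw:7.1f} mW, static {static_mw:5.1f} mW")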

Another very important capability is the economic analysis, which takes all factors into account (IP, manufacturing process, chip packaging, etc.) to estimate yield, die costs, packaging recommendations and pricing, test and assembly costs, production chip costs, and so forth. The ability to perform tradeoffs between technical and economic variables allows system architects to better understand the relationships between cost, die size, power, packaging, and more. Furthermore, lifecycle analysis algorithms allow project managers to forecast chip costs over time as wafer prices, defect densities, and other factors improve.
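The following back-of-envelope sketch shows the general shape of such an economic analysis, using the classic dies-per-wafer approximation and a negative-binomial yield model. The wafer cost, defect densities, and improvement schedule are all hypothetical, and this is not presented as the tool’s cost model.

import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    # Classic approximation: wafer area over die area, minus edge loss.
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def die_yield(die_area_mm2, d0_per_cm2, alpha=3.0):
    # Negative-binomial yield model: Y = (1 + A * D0 / alpha) ** -alpha
    a_cm2 = die_area_mm2 / 100.0
    return (1 + a_cm2 * d0_per_cm2 / alpha) ** -alpha

def cost_per_good_die(die_area_mm2, wafer_cost_usd, d0_per_cm2):
    good_dies = dies_per_wafer(die_area_mm2) * die_yield(die_area_mm2, d0_per_cm2)
    return wafer_cost_usd / good_dies

# Lifecycle view: the same 64 mm^2 die gets cheaper as defect density matures.
for quarter, d0 in [("Q1", 0.50), ("Q3", 0.30), ("Q5", 0.15)]:
    print(quarter, round(cost_per_good_die(64.0, wafer_cost_usd=4000, d0_per_cm2=d0), 2))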

A brief summary of the comprehensive chip planning and estimation capabilities offered by this system is as follows:

* Estimation of die area (bounding, utilization)

* Estimation of power consumption (dynamic, static)

* Estimation of performance (achievable speeds, logic levels)

* Estimation of packaged chip cost (yield, package, NRE, ROI)

* Generation of customizable reports (datasheet, IP BOM, budgetary quote)

* Generation of block diagram (architectural design intent, connectivity)

* Generation of early floorplan (visualization of physical design intent)

* Generation of EDA data (LEF/DEF, Verilog, SDC, CPF, Spirit XML, scripts)
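
As a toy illustration of the last item above (and emphatically not the tool’s actual output format), the fragment below emits a minimal SDC file from a couple of hypothetical clock definitions captured in an early plan:

# Hypothetical clocks captured in an early plan: (name, port, period in ns).
clocks = [
    ("clk_core", "clk_core_i", 2.5),
    ("clk_bus",  "clk_bus_i",  5.0),
]

with open("early_constraints.sdc", "w") as sdc:
    for name, port, period in clocks:
        sdc.write(f"create_clock -name {name} -period {period} [get_ports {port}]\n")
    # Crude pre-RTL I/O budgets relative to the core clock, to be refined later in the flow.
    sdc.write("set_input_delay 1.0 -clock clk_core [all_inputs]\n")
    sdc.write("set_output_delay 1.0 -clock clk_core [all_outputs]\n")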


Of course, the real question would be “Just how accurate is this data?” There’s no point in basing your strategy on estimations that bear little relationship to the final reality. In fact, keeping in mind that these estimations are produced at the pre-RTL design specification stage, empirical data shows that they are surprisingly accurate.

The following silicon correlation results (comparing pre-RTL estimates with final post-place-and-route GDSII) come from a sample size of 120+ designs from 20+ customers across a range of technology nodes. In the case of die area, 93% of designs have 90%+ accuracy, while 70% of designs have 95%+ accuracy, as illustrated in Figure 4. Similarly, in the case of power consumption, 90% of designs achieve 60 to 70% accuracy, as illustrated in Figure 5.

Figure 4. Die area correlation data.

Figure 5. Power consumption correlation data.

Actually, these results might be considered staggeringly accurate when compared to their traditional counterparts generated using spreadsheets or pen and paper. How can this be? The explanation is quite logical: some system architects simply make best-guess estimates, which often turn out to be wildly inaccurate.

Alternatively, more accurate estimations can be obtained by referring to .LIB and .LEF files for source data on timing, power, and physical characteristics. From this information, it is possible to build models with which to perform die area estimations. The downside is that it can take a long time to collect all the necessary data and put it into a usable form. Also, since this work is performed by hand, it is tedious and prone to error, and even small errors can be greatly magnified by the estimation process to produce incorrect results that can lead to catastrophic chip failure.
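For the curious, the fragment below sketches what that manual route often looks like in practice: scraping cell area and leakage values out of a Liberty (.lib) file with regular expressions. It is a deliberately crude simplification; real Liberty files have a richer grammar and are better handled by a proper parser.

import re

def scrape_liberty(path):
    """Pull cell area and leakage out of a .lib file (crude, for illustration only)."""
    text = open(path).read()
    cells = {}
    # Find each cell(...) header, then grab the first area/leakage value inside the body.
    for m in re.finditer(r'\bcell\s*\(\s*"?(\w+)"?\s*\)\s*\{', text):
        body = text[m.end():m.end() + 2000]   # look a bounded distance into the cell body
        area = re.search(r'\barea\s*:\s*([\d.]+)', body)
        leak = re.search(r'\bcell_leakage_power\s*:\s*([\d.eE+-]+)', body)
        cells[m.group(1)] = {
            "area": float(area.group(1)) if area else None,
            "leakage": float(leak.group(1)) if leak else None,
        }
    return cells

# e.g. scrape_liberty("my_stdcells.lib")
# -> {"NAND2_X1": {"area": 1.064, "leakage": 3.2e-05}, ...}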

This is where CCPS comes into play. CCPS removes the element of human error by automatically extracting the necessary technology information. CCPS also increases productivity by having models readily available for anyone to use. The end result is to provide system architects with the maximum available knowledge to drive their total chip estimations, helping them make smarter decisions earlier and giving them the visibility needed to architect their chips the right way, the first time (does “Measure twice, cut once” ring a bell?).


Conclusion

The integration of the Cadence Chip Planning System (CCPS), the Cadence IP Catalog Management System (IPMS), and the ChipEstimate.com IP Ecosystem provides an exceptionally powerful platform that allows all parties involved in the development of an SoC to contribute in a collaborative manner. IP developers now have a formal flow for updating and maintaining IP data by means of the IPMS. And whenever external IP is required, data from ChipEstimate.com is readily available for use by the chip estimation tools in the CCPS.

Before undertaking a new SoC project, system architects and project managers can perform quick “what if” analysis to ensure that such a project is worthwhile. Once a project is underway, the ability to accurately estimate the area, power consumption, and cost associated with different IP blocks implemented in different technology nodes allows the system architects to arrive at the optimum architecture and to select the most appropriate silicon chip process. These decisions allow the team to achieve all of their design goals for the least cost.

Furthermore, the CCPS and the IPMS are configurable, allowing them to be customized based on user and corporate needs. By supporting a “Measure twice, cut once” methodology, this design-centric and IP-centric platform provides a win-win for all stakeholders.


For any inquiries on this article, please contact Dr. Anis Uzzaman (uzzaman@cadence.com).

 

About the authors

Dr. Anis Uzzaman is a Business Development Director in the Cadence Chip Planning Solutions Group and is responsible for worldwide business strategy and growth for the Chip Planning Solutions products and technologies. Anis has been with Cadence for more than 10 years; prior to Cadence, he worked on EDA solutions for IBM Microelectronics. He holds a Ph.D. in Computer Engineering from Tokyo Metropolitan University, Japan, and an MS degree in Electrical & Computer Engineering from Oklahoma State University, USA.

Kenneth Chang is a Sr. Product Manager at Cadence Design Systems for Logic Design and Verification solutions, with 13+ years of experience in SoC design, implementation, and verification. In this capacity, he is responsible for leading frontend initiatives driving silicon realization, working closely with customers and R&D. Prior to joining Cadence, Kenneth was a senior ASIC designer and implementation leader at a number of high-profile startups and large companies, developing bullet-proof methodologies with world-class teams and designing and delivering a plethora of complex, large SoCs incorporating an array of interesting and advanced IP.

