Emulation unbound

by Jim Kenney, TechOnline India - June 20, 2011

The power and utility of emulation is no longer bound to physical limitations. Until recently, all interactions between a design compiled into an emulator and its peripherals were handled by hardware devices that represented those peripherals. Functionally, there isn’t a problem with this; however, it encumbers design teams with two important limitations, both stemming from the sheer number of hardware connections required.

First, the time and care involved in connecting a large amount of intricate external hardware to the emulator locks it down to the single project that needs those particular peripherals. If the emulator has some free cycles, another project team would have to disconnect all of the peripherals, cable up their own in-circuit emulation setup, and then put everything back together for the owning project. In reality, that just doesn’t happen. So the emulator sits idle while other project teams are desperate for emulation cycles but can’t get them.

Second, the more physical connections there are, the more things can go wrong. When you start making many connections, reliability goes down: you’re going to bend some pins, have some bad cables, bad connectors, loose connections, and so on. And if something fails, finding the cause is like untangling a Gordian knot.

A promising solution is to virtualize the peripherals. Virtualization takes the hardware out of peripheral representations. Instead, part of the peripheral is modeled in the emulator and part of it is modeled as an application in a workstation, using co-modeling channels and transactors instead of physical cables and equipment.

The entire environment, both the design inside the emulator and the devices needed to exercise its peripheral interfaces, is virtual and can be configured in software. The emulator becomes a flexible, general-purpose resource where any project team can schedule or grab free time. They can configure the emulation virtually in moments, run their jobs, and then the next group can reconfigure it to suit their design.

Multimedia and video, Ethernet, PCI Express, and SATA controllers can already be handled virtually and are very easy to configure. Emulation solution vendors will keep broadening this portfolio to other disk-drive protocols such as SAS and to virtually any other peripheral you can imagine. Virtualizing the peripherals increases the utilization of the emulator. People don’t have to go into the lab and physically reconfigure it with external hardware; it can be reconfigured with these virtual solutions, through software, from any location on the planet. So more people get to share in the advantages of emulation: high performance, capacity, system-level design and verification, and so on.

Emulation has always been about performance and capacity—faster than simulation, big enough for the largest design—and it still is. But the technology is evolving beyond these two fundamentals: more and more of what used to require a physical hardware connection can now be done virtually.

Driving this paradigm shift is the co-model channel technology that underlies virtualization. Already it has radically changed the landscape by unleashing three powerful emulation solutions: simulation acceleration, software execution debug, and, of course, virtual peripherals.

Figure 1: Co-model channel based virtual solutions enable simulation acceleration, software execution debug, and virtual peripherals.

Co-model channel technology was developed to accelerate transaction-level testbenches, rocketing stimulus data from a workstation to transactors in an emulator over the co-model channels. Virtual peripherals utilize this same transport mechanism to stimulate a design.

How much faster is emulation? Bypass the marketing claims and calculate it yourself. Assume the emulator will run your design at 1 MHz, plus or minus a few hundred kHz; this is the frequency of the fastest clock in your design. Now determine how many times that clock toggles in each second of wall-clock time during simulation. If your simulation runs at 100 clocks/sec, the emulator will run 10,000 times faster. This is the best possible throughput and must be derated to account for testbench overhead. With a well-written testbench it’s not uncommon to see a derating factor of 20%, resulting in an acceleration factor of 8,000X. If your testbench performs a lot of calculations for each test vector sent to the design, the derating factor will be worse.
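
As a sanity check, here is a minimal back-of-the-envelope sketch in Python. The 1 MHz emulator clock, 100 clocks/sec simulation rate, and 20% derating are simply the assumptions from the paragraph above, not measured figures:

    # Rough emulation speedup estimate using the figures quoted above (all assumed).
    EMULATOR_CLOCK_HZ = 1_000_000    # fastest design clock running in the emulator (~1 MHz)
    SIM_CLOCKS_PER_SEC = 100         # design clocks advanced per wall-clock second in simulation
    DERATING = 0.20                  # assumed testbench overhead for a well-written testbench

    raw_speedup = EMULATOR_CLOCK_HZ / SIM_CLOCKS_PER_SEC   # 10,000X best case
    effective_speedup = raw_speedup * (1 - DERATING)       # ~8,000X after derating

    print(f"best-case speedup: {raw_speedup:,.0f}X")
    print(f"derated speedup:   {effective_speedup:,.0f}X")

Plugging in a slower testbench simply lowers the derated figure; the arithmetic stays the same.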

Accelerated transactors run in the emulator and expand a transaction from the testbench into pin-level stimulus. A well-written accelerated transactor can run at full emulation speed (1 MHz) and has a typical expansion factor of roughly 10X for a simple protocol. To keep the emulator fed and running at full speed, your testbench must therefore produce transactions at a 100 kHz rate; anything slower will cause the emulator to wait, resulting in the derating factor mentioned above. Expansion factors vary widely with the complexity of the interface protocol: Ethernet, for example, will expand a single packet (one transaction) into hundreds or thousands of emulation clock cycles, depending on the frame. Designs that typically take days or weeks to run in a traditional simulator can finish in minutes with emulation; a two-week simulation, for example, can complete in about five minutes.
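
To make the feed-rate requirement concrete, here is a small sketch along the same lines. The 1 MHz emulation clock and ~10X expansion factor come from the paragraph above; the 80,000 transactions/sec testbench rate is a hypothetical value chosen only for illustration:

    # Transaction rate the testbench must sustain to keep the emulator busy.
    EMULATOR_CLOCK_HZ = 1_000_000      # assumed emulation speed (~1 MHz)
    CYCLES_PER_TRANSACTION = 10        # assumed expansion factor for a simple protocol

    required_rate = EMULATOR_CLOCK_HZ / CYCLES_PER_TRANSACTION   # 100,000 transactions/sec

    # If the testbench is slower, the emulator waits and the effective clock rate drops.
    testbench_rate = 80_000            # hypothetical testbench throughput, transactions/sec
    effective_clock_hz = min(required_rate, testbench_rate) * CYCLES_PER_TRANSACTION

    print(f"required transaction rate: {required_rate:,.0f}/sec")
    print(f"effective emulation clock: {effective_clock_hz:,.0f} Hz")

For a complex protocol like Ethernet, CYCLES_PER_TRANSACTION would be in the hundreds or thousands, which lowers the transaction rate the testbench has to sustain.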

Software execution debug tools can also be connected to an emulator via transactors and co-model channels, allowing you to stream information out of the emulator to drive offline software debug. You don’t need a hardware probe for every type of processor you’re using; you use a co-model channel and the virtual capability to do the same type of debug you previously did via JTAG. And it’s completely reconfigurable. This supports many more software engineers with the same amount of hardware.

Customers have long been concerned with how many I/O pins an emulator has. Going forward, the architectural emphasis for emulation solutions will be on improving co-model channel throughput, because more and more activity passes through those channels. That way, as emulators get faster and more applications run in parallel, the co-model channels won’t become the limiting factor.

All of this is going to change the way people work in the future. It’s going to broaden the application of emulation, because a lot of companies that could benefit from emulation have been frightened off by all of the external hardware and the lack of a flexible usage model. With co-modeling and virtualization, the emulator becomes an emulation cabinet with a rack full of workstations next to it, and it’s all completely software configurable, very easy to set up, and quick to reconfigure to connect different peripherals and different software debugging technologies, all through a GUI. This is a very familiar configuration that will make engineering teams comfortable fully and effectively utilizing the power of emulation unbound from the limits of the physical world.

About the author:

Jim Kenney has over 25 years of experience in hardware emulation and logic simulation and has spent the bulk of his career at Teradyne and Mentor Graphics Corporation. At Mentor Graphics, Jim has held responsibility for analog, digital, and mixed-signal simulation, hardware/software co-verification, and hardware emulation. He is currently the Marketing Director for Mentor’s Emulation Division. Jim holds a BSEE from Clemson University. Reach him at Jim_Kenney@mentor.com.

