The shift toward electronic system level (ESL) design and verification is beginning as the productivity of RTL modeling and verification techniques lags behind the remarkable growth of design complexity.
ESL methodologies focus on the architecture of the design, raising the level of abstraction for design, modeling, and validation to the transaction level. Transaction-level hardware descriptions simulate much faster, making them a viable solution for architectural analysis, software development, and hardware/software co-verification. Transaction-level modeling (TLM) also allows more compact descriptions: hardware system blocks communicate through function calls at their interfaces rather than through detailed signal-level handshakes, which significantly reduces both code size and simulation time.
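The contrast between a function-call interface and a signal-level handshake can be sketched in a few lines of C++. This is an illustrative sketch only, not the OSCI TLM API; the class and method names (`MemoryModel`, `b_transport_write`) are assumptions chosen to echo TLM conventions.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical transaction-level memory model: one function call
// moves a whole burst of data between initiator and target.
struct MemoryModel {
    std::vector<uint8_t> mem;
    explicit MemoryModel(size_t size) : mem(size, 0) {}

    // TLM-style interface: the entire transfer is a single call.
    void b_transport_write(uint32_t addr, const uint8_t* data, size_t len) {
        for (size_t i = 0; i < len; ++i) mem[addr + i] = data[i];
    }
};
// At RTL, the same burst would be dozens of per-cycle events for every
// beat: drive the address, assert valid, wait for ready, sample data.
```

Because the simulator schedules one call instead of many per-cycle signal events, the transaction-level version runs orders of magnitude faster.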
Working at the ESL, designers can quickly analyze various architectural tradeoffs between power, performance, and area; begin software coding much earlier in the design cycle; and create virtual prototypes for software development and hardware/software integration. These advantages more than justify moving up a level of abstraction.
Fortunately, recent developments in ESL technology make it easier to adopt ESL architectural design and virtual prototyping solutions, while also increasing their benefits. The introduction of the OSCI TLM2 standard, scalable transaction-level models, and automated model building have overcome the obstacles to widespread adoption that remained with earlier ESL technologies.
Models that Talk
Moving modeling, validation, and analysis above the RTL requires a mechanism that models communication and functionality at a higher level. TLM provides this by abstracting cycle-by-cycle hardware signal changes as function calls, using higher-level descriptions and more abstract data objects. There are different ways to build transaction-level models, however, and only if it is done correctly will the barriers to ESL adoption come down. First on the scene were proprietary models.
The biggest problem with proprietary transaction-level models is that they are not interoperable. These custom-built models are created for very specific purposes. Because their applicability is limited, companies are reluctant to invest in them. Furthermore, because each company has its own way of defining how models are structured and how they communicate with each other, these models require custom wrappers to connect them into a target platform, which makes them both expensive and slow. This makes it impractical if not impossible to leverage these models across the industry.
Before a TLM standard existed, the most effective transaction-level model usage was confined to discrete situations that only companies with large modeling resources could afford. The lack of a standard effectively blocked deployment of ESL at that time.
The reason for standards in the first place is to make models reusable and interoperable throughout the design community. Once the TLM1 standard was established, companies felt much more comfortable investing in transaction-level modeling, knowing that the models were sustainable. However, the TLM1 standard fell short on interoperability and did not deliver the expected simulation boost, again frustrating the move to ESL. These shortcomings were recognized by the Open SystemC Initiative (OSCI) standards body and addressed by the OSCI TLM2 standard.
TLM2 overcame the first obstacle for adoption of ESL by establishing the infrastructure for ESL design across the industry and by supporting reuse and interoperability among IP, semiconductor, and system companies. Yet interoperability and portability alone are not enough. There is a need to reduce the modeling effort required to create the transaction models in order to justify the level of investment and improve productivity at the electronic system level.
Making Models Easy to Create
The answer was a single scalable model that handles all ESL abstraction levels and design tasks. This degree of scalability has been achieved through a new modeling concept based on the clear separation between communication, functionality, and the architectural aspects of timing and power.
Figure 1: A scalable transaction-level model entirely separates functionality from the timing and power architecture as well as the communication layer, allowing them to be connected and disconnected on the fly.
By keeping functionality and timing decoupled from each other, scalable transaction-level models allow timing detail to be added, changed, or subtracted as needed and maintain a single behavioral description throughout the design flow. This has an enormous impact, reducing code complexity and making frequent design changes far easier.
If we take a simple direct memory access (DMA) as an example, the pure, high-level functional model of a basic DMA operation can be written with two or three lines of code. When it becomes necessary to model the timing of all the individual transactions to get an approximation of the time required for the transfers, the DMA representation can increase in detail from three to 30 lines of code, and simulation is ten times slower. When it is appropriate to apply the even more detailed cycle accurate transaction-level model, the description explodes in size to hundreds of lines of code. Simulation speed slows down by another order of magnitude, and it is much more difficult to modify the code.
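The DMA example above can be sketched in plain C++ to show how little the loosely timed description needs, and how the timed version grows. The function names and the beat size are illustrative assumptions; a real model would annotate SystemC `sc_time` rather than count cycles in an integer.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Loosely timed (LT): the whole DMA transfer is a couple of lines --
// pure functionality, zero timing detail.
inline void dma_lt(uint8_t* dst, const uint8_t* src, size_t len) {
    std::memcpy(dst, src, len);
}

// Approximately timed (AT): identical behavior, plus per-beat timing
// bookkeeping. Here a bus beat is assumed to be 4 bytes and to cost
// one cycle; a cycle-accurate model would add far more detail still.
inline uint64_t dma_at(uint8_t* dst, const uint8_t* src, size_t len) {
    uint64_t cycles = 0;
    for (size_t off = 0; off < len; off += 4) {
        size_t beat = (len - off < 4) ? (len - off) : 4;
        std::memcpy(dst + off, src + off, beat);
        ++cycles;  // one bus beat per 4-byte chunk
    }
    return cycles;
}
```

Both functions produce the same data movement; only the timing detail, code size, and simulation cost differ, which is exactly the scaling the article describes.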
Making the Models Easy to Use
With a scalable modeling strategy established, a tool was needed to make these models easy to create. Built on top of the scalable transaction-level model concept it created, Mentor Graphics® introduced the Vista™ Model Builder—in essence a modeling wizard that lets users define the interfaces, architectural attributes, and basic elements of a transaction-level model and generate a model “skeleton” very quickly. By typing a few lines of code or selecting a few options, users get a generated model of hundreds of lines, saving significant time even for basic model creation.
As it can be very complex to model timing and power at the transaction level, Model Builder timing and power wizards provide policies that allow users to define timing and power in an intuitive way, greatly reducing the modeling investment. The policies are captured in a very simple table. When the user hits the generate button, the complete model is generated in a consistent way.
Communication is defined using ports and protocols; a loosely timed (LT) layer models pure, untimed functionality; and an approximately timed (AT) layer defines timing and power information reflecting the architectural structure. LT and AT layers are then combined into a single scalable LT+AT model. Once such a layered model structure is achieved, architectural exploration becomes much more feasible. The LT+AT layered approach also allows a single source model to be used for different design tasks. Software validation can be done by switching off the AT layers in the virtual prototype in order to run pure untimed hardware/software simulations. Conversely, users can turn on the AT layer for performance analysis, when timing and power information is needed.
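The LT+AT layering can be illustrated with a small C++ sketch in which the AT layer is switched on or off without touching the behavioral code. The class, its members, and the timing and power numbers are hypothetical; this is not the Vista API, only a minimal sketch of the principle.

```cpp
#include <cstdint>

// Illustrative LT+AT model: the LT layer carries the functionality,
// the AT layer carries timing/power annotation, and the two can be
// connected or disconnected via a flag.
class LtAtModel {
public:
    explicit LtAtModel(bool at_enabled) : at_enabled_(at_enabled) {}

    // LT layer: pure, untimed functionality (stand-in behavior).
    uint32_t process(uint32_t word) {
        uint32_t result = word ^ 0xFFFFFFFFu;
        if (at_enabled_) annotate();  // AT layer only when it is needed
        return result;
    }

    uint64_t elapsed_cycles() const { return cycles_; }
    uint64_t consumed_energy() const { return energy_; }

private:
    // AT layer: timing/power policy, kept apart from the behavior above.
    void annotate() {
        cycles_ += 2;  // assumed pipeline latency per transaction
        energy_ += 5;  // assumed energy units per transaction
    }
    bool at_enabled_;
    uint64_t cycles_ = 0;
    uint64_t energy_ = 0;
};
```

With the AT layer off, the model runs at full LT speed for software validation; with it on, the same untouched functionality also reports timing and power for performance analysis.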
The model building wizard provides a way to apply timing and power attributes side by side with the functional description — reflecting various implementation and architectural scenarios — without any functional change. The transaction-level model’s core functionality remains separate and unaffected. Designers can actually test the impact of architectural changes without costly recoding. For example, a variety of macro-architectures can be tested by applying specific buffering and pipeline policies; various communication protocols can be applied; different input-to-output relationships can be used to model different IP algorithms; and different burst sizes can be specified to model a bus matrix, all with unique timing and power characteristics.
When it is time to synthesize a scalable transaction-level model to RTL, high-level synthesis (HLS) tools allow users to apply different constraints to the same, unmodified functional model, resulting in different RTL implementations. Thus, designers need to model the transaction-level functionality only once during the whole lifetime of a particular block. Even when constraints change, the RTL implementation will change, but the LT functionality remains untouched.
Timing and power, in the context of TLM, describe how transactions are distributed over time and how they consume power, based on the internal implementation and system characteristics. These architectural considerations are also referred to as policies of the model, and in the transaction-level model, they may relate to specific ports and transactions.
For example, an LT transaction may send a complete packet of data over the port. Such a transfer is executed using one function call in the LT model. When all models are abstracted to this level, simulation speed is dramatically increased, and much faster validation of the functionality under system-level scenarios is possible.
When performance is to be explored, the knowledge about how this packet is transmitted is essential. Such packets are usually broken into smaller chunks that depend on the buffering characteristics, protocol, interconnect, and other factors. Different blocks may also process and transmit data differently based on their micro-architecture. An internal implementation may require breaking a packet into smaller groups that are processed in parallel. The model building wizard allows these characteristics to be defined intuitively, so users can quickly explore various macro-architecture scenarios and implementation alternatives. A tool that enables models to quickly exhibit very different architectural and implementation characteristics can be used to optimize a single block and, more importantly, to optimize the entire system. Users may apply different policies to different blocks, and test them in a system context without implementing a single block in RTL.
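How a policy turns one LT packet transfer into AT-level beats can be sketched as a small cost function. `TimingPolicy`, `chunk_bytes`, and `cycles_per_chunk` are hypothetical policy parameters invented for this sketch, not tool-defined names; real policies would also cover buffering, pipelining, and power.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical timing policy: the architect picks a burst size and a
// per-burst cost, and the same packet acquires different timing.
struct TimingPolicy {
    size_t chunk_bytes;        // burst size on the interconnect
    uint64_t cycles_per_chunk; // cost of moving one burst
};

// Cycles needed to transmit one packet under a given policy.
inline uint64_t packet_cycles(size_t packet_bytes, const TimingPolicy& p) {
    size_t chunks = (packet_bytes + p.chunk_bytes - 1) / p.chunk_bytes;
    return chunks * p.cycles_per_chunk;
}
```

Comparing `packet_cycles(64, {16, 2})` with `packet_cycles(64, {32, 3})` is the kind of what-if experiment the wizard enables: the packet and the functionality are unchanged, only the policy differs.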
Defining characteristics at the interface level allows the designer to quickly model the impact of architectural decisions in a top-down approach without the need to modify the model behavior and without the tedious design work of RTL implementation. This what-if approach allows quick experimentation with a multitude of architectural possibilities, so that architectural recommendations can be quickly provided to the RTL implementation team as timing and power constraints for each block.
Delivering the Promise of ESL
The introduction of the OSCI TLM2 standard brought the promise of ESL within reach. Scalable transaction-level models supply the infrastructure for the entire ESL design flow. It is critical to have simplified modeling practices and tools, which filter out language and complex semantics issues and allow designers to concentrate on pure functionality and architectural issues. The scalable transaction-level model approach allows users to quickly explore various complex micro-architecture alternatives in the system context with minimal coding effort while keeping the code representing the functionality intact.
About the Authors:
Yossi Veller is the chief scientist in the Mentor Graphics ESL Division. During his long software career, Yossi has led ADA compiler, VHDL, and C simulation development groups. He was also the CTO of Summit Design. He holds degrees in computer science, mathematics, and electrical engineering.
Rami Rachamim is a Product Marketing Manager in the Mentor Graphics ESL Division. Among his accomplishments during 20-plus years in electrical engineering, Mr. Rachamim was a founder and VP of Marketing at Summit Design. He earned a B.Sc. in Electronic Engineering with honors from Tel Aviv University in 1988.