The biggest challenge in system-level design is modeling, and getting SystemC models to work together in simulation environments has been a long, hard struggle. Finally, it appears that the stars are aligned and we are converging on the three things we need for SystemC model interoperability – a standard, models, and compatible tools.
SystemC itself arose because companies were creating their own C/C++ class libraries for high-level hardware and systems design, and those companies couldn’t exchange models with other companies or third-party suppliers. But just defining a common language isn’t enough. You also have to define a common methodology.
Early on, it became clear that SystemC could accelerate a move to transaction-level modeling (TLM) for both design and verification. The problem was that different companies had their own TLM versions, and they had to build specialized wrappers to understand each other’s models. So the Open SystemC Initiative (OSCI) introduced its TLM-1.0 standard in 2005. This standard defined a set of APIs for transaction-level communications.
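The TLM-1.0 APIs were message-passing interfaces: blocking and non-blocking put/get calls templated on an arbitrary payload type. The plain-C++ sketch below is not the actual OSCI headers — names and behavior are simplified for illustration — but it conveys the shape of the API, and why it fell short: the transaction type T is left entirely to the model writer.

```cpp
#include <queue>
#include <cassert>

// Simplified stand-in for a TLM-1.0-style transport channel.
// The API fixes *how* modules exchange transactions (put/get),
// but says nothing about *what* a transaction contains.
template <typename T>
class tlm_fifo_sketch {
public:
    void put(const T& trans) { q.push(trans); }      // blocking put (sketch: never full)
    T get() { T t = q.front(); q.pop(); return t; }  // blocking get (sketch: assumes non-empty)
    bool nb_get(T& t) {                              // non-blocking get
        if (q.empty()) return false;
        t = q.front(); q.pop();
        return true;
    }
private:
    std::queue<T> q;
};

// Each company defined its own payload type, e.g.:
struct my_bus_transaction { unsigned addr; unsigned data; bool write; };
```

Two models built this way can call each other's put/get, but unless they agree on the payload struct, a wrapper still has to translate one vendor's transaction fields into the other's — exactly the problem TLM-2.0 set out to fix.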
TLM-1.0 was not enough, however, because it didn’t define the content of those transaction-level communications. It defined how to communicate, but not what would be communicated. It’s somewhat like giving two people who speak different languages compatible two-way radios. Something is still missing.
Enter TLM-2.0, which defines the content of transactions through a "generic payload" that standardizes the data structures needed for address and data transfers. In December 2006, OSCI released the first draft standard for TLM-2.0, which was then held up for a year in a review and revision cycle. The second draft was released in December 2007, and the standard was finally announced in July 2008.
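The generic payload standardizes the attributes a memory-mapped bus transaction carries. The real class is tlm::tlm_generic_payload in the OSCI TLM-2.0 library; the plain-C++ sketch below is not that class — field names follow the standard's attributes, but everything else is simplified — and it shows roughly what every TLM-2.0 initiator and target now agrees on.

```cpp
#include <cstdint>
#include <cstddef>
#include <cassert>

// Simplified sketch of the TLM-2.0 generic payload attributes
// (the real class is tlm::tlm_generic_payload).
enum command { READ_COMMAND, WRITE_COMMAND, IGNORE_COMMAND };
enum response_status { INCOMPLETE_RESPONSE, OK_RESPONSE, ADDRESS_ERROR_RESPONSE };

struct generic_payload_sketch {
    command         cmd             = IGNORE_COMMAND;
    uint64_t        address         = 0;
    unsigned char*  data_ptr        = nullptr;   // points at initiator-owned storage
    unsigned int    data_length     = 0;         // bytes to transfer
    unsigned char*  byte_enables    = nullptr;   // optional per-byte lane mask
    unsigned int    streaming_width = 0;         // for streaming/FIFO-style targets
    response_status response        = INCOMPLETE_RESPONSE;
};

// A target can model a simple memory just by interpreting the payload:
inline void memory_b_transport(generic_payload_sketch& gp,
                               unsigned char* mem, size_t mem_size) {
    if (gp.address + gp.data_length > mem_size) {
        gp.response = ADDRESS_ERROR_RESPONSE;
        return;
    }
    unsigned char* loc = mem + gp.address;
    if (gp.cmd == WRITE_COMMAND) {
        for (unsigned i = 0; i < gp.data_length; ++i) loc[i] = gp.data_ptr[i];
    } else if (gp.cmd == READ_COMMAND) {
        for (unsigned i = 0; i < gp.data_length; ++i) gp.data_ptr[i] = loc[i];
    }
    gp.response = OK_RESPONSE;
}
```

Because any compliant initiator fills in these same fields, any compliant target can decode them — the "what" that TLM-1.0 left undefined.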
As noted in an SCDsource article, the formation of the TLM-2.0 standard was not without controversy. The original TLM-2.0 draft concerned programmer’s view (PV) and programmer’s view with timing (PVT) models. The revised draft kept PV, retired PVT, and introduced “loosely timed” and “approximately timed” models. For those who want to know more, OSCI recently released a TLM-2.0 tutorial.
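The two new styles differ chiefly in their transport mechanisms: loosely timed models use a single blocking call with an annotated delay (fast, suited to software execution), while approximately timed models break a transaction into non-blocking phases (slower, suited to architectural analysis). The schematic, non-SystemC sketch below — call names loosely modeled on the standard's b_transport and nb_transport_fw, everything else simplified — contrasts the two call shapes.

```cpp
#include <cstdint>
#include <cassert>

struct payload { uint64_t addr; unsigned data; };

// Loosely timed: one blocking call completes the whole transaction.
// The target annotates its latency into 'delay_ns' instead of waiting,
// which lets the initiator run ahead of simulation time.
void b_transport_sketch(payload& p, uint64_t& delay_ns) {
    p.data = 42;      // perform the entire read in one call
    delay_ns += 10;   // annotate 10 ns of modeled latency
}

// Approximately timed: the transaction advances through timing points.
enum phase { BEGIN_REQ, END_REQ, BEGIN_RESP, END_RESP };
enum sync_enum { ACCEPTED, COMPLETED };

sync_enum nb_transport_fw_sketch(payload& p, phase& ph, uint64_t& delay_ns) {
    if (ph == BEGIN_REQ) {
        ph = END_REQ;     // request accepted; response comes in a later phase
        delay_ns += 4;
        return ACCEPTED;
    }
    // ... remaining phases would be driven by callbacks in a real model
    return COMPLETED;
}
```

The loosely timed style trades timing fidelity for simulation speed; the approximately timed style exposes the request/response phases an architect needs to study contention and latency.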
With a standard in place that promised true model interoperability, the industry breathed a collective sigh of relief. But you still need the models and the tools. ARM has been developing its Fast Model library, which consists of instruction-accurate models suitable for early software development. The Cortex-A9 is a recent addition. Fast Models can run in the ARM simulation environment and can also be exported to TLM-2.0 SystemC environments.
Meanwhile, EDA tools are adding TLM-2.0 support. For example, the Cadence Incisive Enterprise Simulator 8.2 added that support in the fourth quarter, and this week (May 19), Cadence is announcing an expanded verification solution that natively recognizes TLM-2.0 constructs to automate debugging and analysis.
At the CDN Live! EMEA 2009 conference this week, ARM and Cadence will host a demo that shows what can be accomplished with SystemC interoperability. The demo uses an ARM Fast Model for the dual-core Cortex-A9, provided in a TLM-2.0 wrapper. Using ARM’s platform development and debugging tools in concert with Incisive, the demo will show the execution of software that performs a DMA transfer. It’s a fairly simple task, but the same principle could be used to boot Linux.
One interesting point about this demo is that many ARM customers do not have access to RTL for processors such as the Cortex-A9. With the Fast Models and TLM-2.0, they now have another option.
Are we done yet? No. Remember that TLM-2.0 focuses on three modeling styles. Standards have yet to be defined for cycle-accurate models, for example. Also, many models are not yet compatible with TLM-2.0, and tool support exists to varying degrees. But we’re making real progress toward removing what is probably the main bottleneck to the proliferation of system-level design – the availability of interoperable models.