What Lies Beyond The SystemC TLM-2.0 Standard?

Filed under: Industry Insights, SystemC, OSCI, TLM, Virtual platform

By supporting SystemC model interoperability, the Transaction Level Modeling (TLM-2.0) standard from the Open SystemC Initiative (OSCI) was a watershed event in the development of ESL flows. But it was not the final answer. At a recent virtual platform panel, participants noted that TLM-2.0 is a good start, but much more remains to be done. For example, panelists talked about debug tool interoperability, configuration and control, model interfaces, and defining “best practices” for model development.

To discover what more needs to be done with respect to model interoperability, I recently sat down with some of Cadence’s SystemC experts. We talked about what TLM-2.0 has accomplished, and what still remains to be done.

First, some background. In 2005, OSCI introduced TLM-1.0, which defined a standard set of APIs for transaction-level communication but did not define the content of those communications. TLM-2.0, announced in July 2008, fills that gap by defining transaction content with a “generic payload” that describes the necessary data structures.
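The generic payload bundles the attributes of a memory-mapped transaction into one object that any initiator and target can agree on. A simplified, self-contained C++ sketch of the idea (not the actual tlm::tlm_generic_payload class, which also carries byte enables, streaming width, DMI hints, extensions, and a memory-management interface):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Simplified sketch of the TLM-2.0 generic-payload idea: one struct
// that carries everything a memory-mapped bus transaction needs.
enum Command  { READ, WRITE };
enum Response { INCOMPLETE, OK, ADDRESS_ERROR };

struct GenericPayload {
    Command   command  = READ;
    uint64_t  address  = 0;
    uint8_t*  data     = nullptr;   // pointer into the initiator's buffer
    unsigned  length   = 0;         // number of bytes to transfer
    Response  response = INCOMPLETE;
};

// A trivial target: 256 bytes of memory that services payloads.
struct Memory {
    uint8_t mem[256] = {0};
    void transport(GenericPayload& trans) {
        if (trans.address + trans.length > sizeof(mem)) {
            trans.response = ADDRESS_ERROR;
            return;
        }
        if (trans.command == WRITE)
            std::memcpy(mem + trans.address, trans.data, trans.length);
        else
            std::memcpy(trans.data, mem + trans.address, trans.length);
        trans.response = OK;
    }
};
```

Because every compliant model reads and writes the same set of attributes, a payload built by one vendor's initiator can be serviced by another vendor's target without glue code, which is the interoperability the standard was after.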

“TLM-2.0 has done a pretty good job of addressing interoperability, at least for memory-mapped busses,” said Neeti Bhatnagar, engineering director at Cadence. She noted that TLM-2.0 processor IP is beginning to appear, although there’s still a lack of other types of IP. EDA tools are supporting TLM-2.0 as well. For example, Cadence in May announced an expanded verification solution that natively recognizes TLM-2.0 constructs in order to automate debug and analysis without requiring any model instrumentation.

Neeti said there are currently two main types of users for TLM-2.0. Most commonly, this standard is used for virtual platform development, where designers use the “loosely timed” (LT) TLMs defined by the standard. The other use is architectural performance analysis, which generally involves more accurate “approximately timed” (AT) models.
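The two coding styles differ mainly in how they treat time. A loosely timed target completes a transaction in a single blocking call and merely adds its latency to a delay argument passed by the caller, while an approximately timed exchange breaks the transaction into timed phases. A minimal sketch of the LT pattern, with plain integers standing in for sc_time (the latency value is an invented example):

```cpp
#include <cassert>

// Loosely-timed sketch: the target completes the whole transaction in
// one call and annotates its latency onto the caller's delay argument,
// rather than consuming simulation time itself. Integers stand in for
// sc_time; the 10 ns latency is a made-up figure for illustration.
struct LtTarget {
    int read_latency_ns = 10;
    int value = 42;

    // b_transport-style call: returns data, accumulates delay.
    int b_transport(int& delay_ns) {
        delay_ns += read_latency_ns;   // annotate, don't wait
        return value;
    }
};
```

Because the target never suspends, an LT initiator can issue many such calls back to back and settle up with the simulator later, which is what makes the style fast enough for software-driven virtual platforms.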

What’s missing?

Now that TLM-2.0 is available, we are beginning to see areas where additional standards or interoperability guidelines may be needed. For example, some users are interested in cycle-accurate modeling using TLM-2.0.

“TLM-2.0 defines cycle accurate but doesn’t say too much more about it,” noted George Frazier, senior member of consulting staff. Neeti said there is some interest right now in using cycle-accurate models with TLM-2.0 for more accurate performance analysis, and some users are trying to figure out how to do so. But it is loosely timed modeling that has seen the most proliferation and interest.

Neeti also observed that although processor models are becoming available in TLM-2.0 wrappers, each processor model has its own software debugger. There is no standard debug API for processor models, and processor debug tools are not interoperable in a standard way.

TLM-2.0 allows temporal decoupling, in which a process can run ahead of the simulator to provide a significant increase in simulation performance. When users plug together models from different vendors to assemble virtual platforms, it can be challenging to fine-tune temporal decoupling across all the processes in the system to arrive at the necessary accuracy-versus-performance tradeoff. “Things could get tricky because models are not running in lockstep,” said Bishnupriya Bhattacharya, senior member of consulting staff.
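Temporal decoupling is usually managed with a “quantum”: each process keeps a local time offset ahead of global simulation time and only synchronizes with the simulator once that offset reaches the quantum. A self-contained sketch of the bookkeeping, modeled loosely on the tlm_utils quantum-keeper idea, with integers standing in for sc_time:

```cpp
#include <cassert>

// Sketch of quantum-keeper bookkeeping for temporal decoupling.
// A decoupled process accumulates local time and only yields back to
// the simulator (sync) once it has run a full quantum ahead.
struct QuantumKeeper {
    int quantum;          // how far ahead a process may run
    int local_time = 0;   // time accumulated since the last sync
    int syncs = 0;        // how many times we yielded to the simulator

    explicit QuantumKeeper(int q) : quantum(q) {}

    void inc(int t)        { local_time += t; }            // annotate delay
    bool need_sync() const { return local_time >= quantum; }
    void sync() {          // stand-in for wait(local_time)
        ++syncs;
        local_time = 0;
    }
};
```

A larger quantum means fewer context switches and faster simulation, but each model sees the others’ state less often; choosing it per process, across models from several vendors, is exactly the tuning problem Bhattacharya describes.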

CCI provides the “next step”

In February 2009, OSCI introduced the Configuration, Control and Inspection (CCI) working group. This group seeks to develop “instrumentation standards” for models from different providers, making it possible to configure and control models and simplifying system-level debug and analysis. According to the OSCI announcement, the group is considering configuration parameters, register characteristics, power and performance data probing, command interfaces, save/restore, and other issues related to configuration, control and debug. (Cadence already supports save/restore for SystemC, announced in May as part of the expanded verification solution.)

Bishnupriya put it this way. “TLM-2.0 is about model-to-model interoperability. CCI is trying to accomplish the next step, which is model-to-tool interoperability. One of the first few areas it’s looking into is how you can make parameters part of the model definition, as opposed to being tool specific.”
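The idea can be sketched as a parameter object that registers itself in a global registry at construction, so any tool can discover and set it by name without knowing the model’s internals. This is only an illustration of the concept, not the CCI API, which was still being defined at the time; the registry, class, and parameter names are invented for the example.

```cpp
#include <cassert>
#include <map>
#include <string>

// Sketch: a model parameter that publishes itself in a global registry
// so tools can discover and configure it by name. Illustrative only;
// not the actual CCI API.
inline std::map<std::string, int*>& param_registry() {
    static std::map<std::string, int*> r;
    return r;
}

struct IntParam {
    int value;
    IntParam(const std::string& name, int default_value)
        : value(default_value) {
        param_registry()[name] = &value;   // visible to any tool
    }
};

// A model declares its parameters as part of its definition...
struct CacheModel {
    IntParam line_size{"cache.line_size", 64};
};

// ...and a "tool" configures the model without touching its internals.
inline bool tool_set(const std::string& name, int v) {
    auto it = param_registry().find(name);
    if (it == param_registry().end()) return false;
    *it->second = v;
    return true;
}
```

The point of the sketch is the ownership shift Bishnupriya describes: the parameter belongs to the model’s definition, and every tool sees the same one, rather than each tool inventing its own configuration mechanism.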

The CCI working group is still gathering requirements, and the full scope of what it will consider is not yet determined. There has been little coverage of CCI in the trade press. But anyone interested in ESL flows should keep an eye on this evolving effort. It may help define the next level of SystemC model interoperability.

Richard Goering

