TLM: The Year in Review, and Trends for 2012

Filed under: Hardware/software co-verification, High-Level Synthesis, verification, SystemC, TLM, ASIC, HLS, C-to-Silicon, TSMC, System Realization, C++, system design

2011 was my first full year in the land of Transaction-Level Modeling (TLM) design and verification, after spending my entire career to that point in RTL. I made my move upward in abstraction level in mid-2010 because it seemed like the time had finally come for this methodology to start becoming mainstream, delivering the benefits that have been sought for years.

Has this methodology started to become mainstream? I think it's safe to say that it has. We have been working with some large semiconductor companies that say they do not want to write RTL any more. They need much faster verification turnaround and much easier re-use of their blocks, and TLM delivers on both fronts. But in many cases we are still in the pilot-project phase, because this is a new methodology that requires transition and education.

Of course, Japan has been the leader in adopting high-level synthesis, which has delivered great time-to-market benefits to that consumer-electronics-dominated region. This continued in 2011, but we have also seen expansion into companies in all other regions as well. If you look at the EDAC numbers for category 2.1 ESL Synthesis, you can see that revenues in 2010 were 28% higher than in 2009, and revenues in Q1-Q2 2011 were 54% higher than in Q1-Q2 2010. A lot of this is because today's high-level synthesis tools, like C-to-Silicon Compiler, deliver Quality of Results (QoR) that meet or beat hand-written RTL. Most types of designs cannot afford to sacrifice QoR (at least not by much) for the sake of schedule.

We heard a great story from Renesas RMS at the SystemC Japan conference, about how two SystemC "beginners" were able to design, verify, and synthesize a 17M-gate design in 8 months. That's a great productivity enhancement. But the best part was that they were able to achieve first-pass timing closure at 650 MHz in 40nm, which is not trivial. This is a great example of how this higher level of abstraction enables quantum leaps in productivity, now without having to sacrifice QoR.

But the quantum leap in productivity requires more than just high-level synthesis. After all, one of the biggest benefits customers see from moving to TLM is a reduction in the verification-debug cycle, typically in the 30-50% range. This is because verification at this level means you can start to build and test your verification environment without having to wait for the design team to settle on the desired micro-architecture and write the RTL. And higher abstraction means faster simulation runtimes and much easier debug. But all of this requires verification environments to be extended up to TLM, and set up so that new functionality is verified as it is introduced and then regressed at later stages. We call this "multi-level verification," and putting it all together in a metric-driven way is key to realizing the savings mentioned above. We have been working with customers to roll this methodology out, and it is also now available as part of TSMC Reference Flow 12.
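
To make that concrete, here is a minimal sketch of what verifying at the transaction level looks like, assuming the SystemC 2.x and TLM-2.0 headers are available; the register-file model and all names are hypothetical stand-ins for a real design block. The point is that the stimulus and checks exist, and run fast, before any RTL is written; the same tests can later be regressed against the RTL through a transactor.

    #include <systemc.h>
    #include <tlm.h>
    #include <tlm_utils/simple_initiator_socket.h>
    #include <tlm_utils/simple_target_socket.h>

    // Hypothetical transaction-level model of a design block: a tiny register file.
    struct reg_block : sc_module {
        tlm_utils::simple_target_socket<reg_block> socket;
        unsigned regs[4];

        SC_CTOR(reg_block) : socket("socket") {
            for (int i = 0; i < 4; i++) regs[i] = 0;
            socket.register_b_transport(this, &reg_block::b_transport);
        }

        // One b_transport call models an entire bus transaction, which is
        // a big part of why TLM simulation is so much faster than RTL.
        void b_transport(tlm::tlm_generic_payload& trans, sc_time& delay) {
            unsigned* data = reinterpret_cast<unsigned*>(trans.get_data_ptr());
            unsigned  idx  = (trans.get_address() >> 2) & 0x3;
            if (trans.is_write()) regs[idx] = *data;
            else                  *data = regs[idx];
            trans.set_response_status(tlm::TLM_OK_RESPONSE);
        }
    };

    // TLM-level testbench: write a register, read it back, check the result.
    struct tb : sc_module {
        tlm_utils::simple_initiator_socket<tb> socket;

        SC_CTOR(tb) : socket("socket") { SC_THREAD(run); }

        void run() {
            unsigned data = 0xCAFE;
            sc_time delay = SC_ZERO_TIME;
            tlm::tlm_generic_payload trans;
            trans.set_address(0x4);
            trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
            trans.set_data_length(sizeof(data));
            trans.set_streaming_width(sizeof(data));
            trans.set_command(tlm::TLM_WRITE_COMMAND);
            socket->b_transport(trans, delay);

            data = 0;
            trans.set_command(tlm::TLM_READ_COMMAND);
            socket->b_transport(trans, delay);
            sc_assert(data == 0xCAFE);   // functional check, no RTL needed yet
            cout << "TLM regression passed" << endl;
        }
    };

    int sc_main(int, char*[]) {
        reg_block dut("dut");
        tb        test("test");
        test.socket.bind(dut.socket);
        sc_start();
        return 0;
    }

Because one b_transport call stands in for an entire pin-level bus transaction, tests like this run far faster than their RTL equivalents, which is where much of that 30-50% debug-cycle saving comes from.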

RF 12 also introduces virtual prototyping for early software development. The nice thing about this methodology is that the virtual prototype is also written in SystemC TLM. Granted, it is more abstract than what will be synthesized; after all, the software doesn't care how the hardware is threaded or how the functional unit talks to the bus. However, by using the same base model both for the prototype and for refining toward hardware, the chances of functional differences between the two are greatly reduced. This delivers great benefits for the reliability of the hardware-software interaction. Our own Michael "Mac" McNamara wrote a nice article outlining the benefits of this approach.
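
As an illustration of that single-source-model idea, here is a hedged sketch (all names hypothetical) of how the same untimed C++ core can sit behind both an abstract virtual-prototype view for software and a clocked, pin-level view in the style a high-level synthesis tool would consume:

    #include <systemc.h>

    // Shared functional core: plain untimed C++, written once.
    struct scaler_core {
        int process(int sample) { return (sample * 3) >> 2; }  // placeholder math
    };

    // Virtual-prototype view: a direct function call, no clocks or pins,
    // fast enough for software teams to run code against.
    struct scaler_vp {
        scaler_core core;
        int call(int sample) { return core.process(sample); }
    };

    // Hardware-refinement view: a clocked, pin-level wrapper around the
    // very same core, ready to be refined toward synthesis.
    struct scaler_hw : sc_module {
        sc_in<bool> clk;
        sc_in<int>  in_data;
        sc_out<int> out_data;
        scaler_core core;

        SC_CTOR(scaler_hw) { SC_CTHREAD(step, clk.pos()); }

        void step() {
            while (true) {
                out_data.write(core.process(in_data.read()));
                wait();  // one sample per clock in this simple sketch
            }
        }
    };

    int sc_main(int, char*[]) {
        // Exercise the virtual-prototype view the way software would.
        scaler_vp vp;
        sc_assert(vp.call(100) == 75);
        cout << "VP check passed" << endl;
        return 0;
    }

Because both views call the same scaler_core, a functional bug fixed in one is automatically fixed in the other, which is exactly the hardware-software reliability benefit described above.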

So what will 2012 bring? I'm not one for bold predictions, but I've seen the beginning of some trends that I think will start to pick up in 2012:

  • We will see a large uptake in TLM design and verification in major semiconductor companies outside of Japan. We have heard numerous customers this year say that they do not want to write RTL any more. The main reasons cited for wanting to move up in abstraction are verification productivity and easier re-use. The IP re-use argument especially makes sense for large semiconductor companies, who can amortize the effort of IP development more readily over many SoCs.
  • The previous trend will logically increase the demand for SystemC training and educational material. Even though this methodology is C++-based, because it is designing hardware it really requires hardware design expertise. It is not as simple as taking an algorithm and building a chip: you need to understand how to partition into parallel threads, how the block will communicate with the system bus, and how to set the high-level constraints to meet the needs of the target application. Most hardware designers have some C background, but often it's well in the past. SystemC is just a C++ class library with hardware-specific constructs (see the short module sketch after this list), but its newness, combined with the rustiness of most designers' C skills, gives rise to the need for education.
  • We will see more systems houses do their own chip design again. Back in the early '90s, the ASIC vendor model enabled systems companies to design their own chips. Consolidation into SoCs, enabled by Moore's Law and forced by the economics of design (really, verification) effort and mask costs, ended this trend, with systems companies gravitating more toward standard parts and differentiating in software. However, trendsetters like Apple show that there is still room for systems companies to differentiate with better chips. If TLM can rein in the design and verification effort, a systems house can again look at designing a differentiated hardware-plus-software platform that can be used across its portfolio of devices in order to make the economics work.
  • If more systems houses do their own chip design, there will be more demand for implementation services. This is the ASIC vendor model reborn, except that the company performing the service will not be manufacturing the chip as well, and of course the handoff point will be different. eSilicon, Global Unichip and Open Silicon, amongst others, offer much of this today. What we would need is a well-defined methodology for whatever gets handed over (is it the TLM, or the RTL?).
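
On the SystemC-education point above: here is a minimal sketch of what "a C++ class library with hardware-specific constructs" means in practice, assuming the SystemC 2.x headers are installed (the module and signal names are made up for illustration).

    #include <systemc.h>

    // An ordinary C++ class (via the SC_MODULE macro) with hardware-specific
    // members: ports, a clocked process, and cycle-by-cycle wait()s.
    SC_MODULE(toggler) {
        sc_in<bool>  clk;
        sc_out<bool> q;

        SC_CTOR(toggler) { SC_CTHREAD(run, clk.pos()); }

        void run() {
            bool state = false;
            while (true) {          // normal C++ control flow...
                q.write(state);     // ...driving a hardware port
                state = !state;
                wait();             // each wait() consumes one clock edge
            }
        }
    };

    int sc_main(int, char*[]) {
        sc_clock        clk("clk", 10, SC_NS);
        sc_signal<bool> q;
        toggler t("t");
        t.clk(clk);
        t.q(q);
        sc_start(100, SC_NS);
        cout << "final q = " << q.read() << endl;
        return 0;
    }

The C++ parts (classes, control flow, the standard library) are familiar; the learning curve is in the hardware semantics of clocked threads, signals, and ports, which is exactly why the training demand lands on hardware concepts rather than on the language itself.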

Anyway, those are a few of my thoughts. What do you think will happen in this space in 2012? I'm looking forward to an interesting new year!

Jack Erickson

Comments (2)

By Krupa Shah on January 4, 2012
Your article is really informative, and it seems it's now time to move towards SystemC and TLM. And now we also have CtoS-like tools as well.

By Muralikrishna Pattaje on January 7, 2012
The TLM, or class-based, verification methodology is not so amazing. I am working on porting an older Verilog test bench to a new UVM test bench. Now I need to worry about the complexities of object/class communication as well as the design, so the work is more challenging and takes more time. I need to wait and watch whether it is really worth it.
Maybe starting the environment from scratch would be more rewarding?
