When Will We Move From RTL to TLM? I Need to Know!

Filed under: Logic Design, Synthesis, RTL, RTL Compiler, C-to-Silicon, TLM

My esteemed colleague, Steve Brown, recently wrote a well-thought-out piece forecasting what it will take to move the bulk of design from the RTL abstraction to transaction-level modeling (TLM). He uses the gate-level-to-RTL migration as a reference point so that we can learn from history.

He lists a lot of factors that enabled the mainstream shift from gate-level to RTL, and sketches out a similar list of what would be required to move from RTL to TLM. It's a long list. Having worked in the logic design area of EDA since roughly 1993, I'd like to offer my own take. And given that I'm product manager of a product named "RTL Compiler", I have a personal interest in understanding this.

Benefits

Nobody has time to change the way they do things unless they will realize a significant benefit from doing so. Writing RTL code is not a bottleneck today, but verifying that RTL is. This is similar to the early-to-mid 1990s, when gate-level functional verification became too cumbersome and fast RTL simulation became available. Even though we made the jump to RTL, the verification problem has grown even more quickly. This is a function both of Moore's Law and of the increased need to verify hardware together with software. Moving to C-based TLM modeling addresses this nicely.

But without a sequential equivalence checking capability (SLEC), don't you have to re-verify everything at RTL?

Let's go back to our history lesson: when did logical equivalence checking become mainstream? Not until the early 2000s. Until then, you would just run a subset of your simulations on the gate-level design. Even today, some gate-level simulation remains, to verify asynchronous interfaces. In the TLM world, you will always have to run some RTL simulation to verify clock-dependent interfaces such as FIFOs and queues. There will also be more need to verify clock-domain synchronization and the like, but that should be addressable with static methods.
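To see why some RTL simulation will always remain, consider what a transaction-level FIFO model actually captures. The sketch below is a hypothetical, minimal TLM-style FIFO (not from any particular library): transfers are plain function calls on a software queue, so the functional behavior is there, but the cycle-by-cycle timing of full/empty backpressure and any clock-domain-crossing behavior is abstracted away entirely. Those are exactly the properties that still require RTL simulation or static checks.

```cpp
#include <cassert>
#include <cstdint>
#include <queue>

// Hypothetical transaction-level FIFO: a put or get completes instantly
// as a function call. Depth is modeled, but *when* the FIFO fills or
// drains relative to a clock edge is not representable at this level.
class TlmFifo {
public:
    explicit TlmFifo(std::size_t depth) : depth_(depth) {}

    // Rejects the write when full -- functional backpressure only;
    // the cycle at which fullness is observed is abstracted away.
    bool put(uint32_t word) {
        if (q_.size() >= depth_) return false;
        q_.push(word);
        return true;
    }

    // Rejects the read when empty; same caveat as put().
    bool get(uint32_t& word) {
        if (q_.empty()) return false;
        word = q_.front();
        q_.pop();
        return true;
    }

private:
    std::size_t depth_;
    std::queue<uint32_t> q_;
};
```

A model like this is ideal for fast software co-simulation, and useless for verifying, say, a two-flop synchronizer on the read pointer, which is why the RTL view of such blocks sticks around.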

Overall, I think we can safely say that the verification productivity net benefit is large.

Costs

What do you give up when you make the move? Or more specifically:

How can machine-generated RTL deliver better performance, power, and area than RTL created by an experienced designer?

This has always been a concern - and not a trivial one - as we moved from transistors to gates and then from gates to RTL. If the automated solution can deliver at least comparable results for a large majority of cases, then there is a net benefit. And we're seeing this today, for instance here, here, and here. This is largely delivered by connecting C-level synthesis to implementation, with real production RTL-synthesis estimation built right in.

In fact, you're more likely to achieve better results with a methodology that lets you explore a larger solution space before you commit to implementation, and moving to a higher level of abstraction enables exactly that. And if there are still a couple of critical blocks that need to be hand-implemented by a master craftsman, that remains possible, because all the other blocks can be done so much more quickly.
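To make the "larger solution space" point concrete, here is a sketch of the kind of untimed C++ a TLM synthesis tool takes as input. The function, names, and tap count are all illustrative, not from any specific tool's examples. The algorithm is written once; how many multipliers to instantiate, whether to pipeline the loop, and the register scheduling are choices the tool explores under timing and area constraints, rather than decisions hand-coded into the RTL.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical untimed behavioral model of a 4-tap FIR filter.
// At this level there are no clocks or registers in the source;
// the synthesis tool derives the schedule and resource sharing.
constexpr std::size_t kTaps = 4;

int32_t fir4(int16_t sample, int16_t history[kTaps],
             const int16_t coeff[kTaps]) {
    // Shift the delay line (in hand-written RTL, a register chain).
    for (std::size_t i = kTaps - 1; i > 0; --i) {
        history[i] = history[i - 1];
    }
    history[0] = sample;

    // Multiply-accumulate; unrolled fully, partially, or not at all
    // depending on the constraints given to the tool.
    int32_t acc = 0;
    for (std::size_t i = 0; i < kTaps; ++i) {
        acc += static_cast<int32_t>(history[i]) * coeff[i];
    }
    return acc;
}
```

Feeding an impulse through this model walks the coefficients out one per call, which is the usual quick functional sanity check before handing the source to synthesis.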

But don't you have to partition your control and datapath logic?

This is a big deal, but it is another problem that modern TLM synthesis tools have solved.

Therefore the costs associated with moving to TLM-based design are also below any sort of threshold that should prevent it.

Industry support and infrastructure

This is key in moving a product across the chasm and into mainstream adoption, and a lot of Steve's items fall in this bucket.

You need designers to learn the language. This isn't a huge stretch, since the modeling is C-based and most folks know C, but there are nuances specific to this dialect of C.

EDA vendors need to support a common language spec so customers can build a complete working flow. Verilog's success was enabled by Cadence opening up the language to OVI. OSCI's TLM 2.0 specification helps with this, and EDA vendors have built support around it.

There should be third-party IP available. This was recently highlighted by Richard Goering and Gary Smith. I don't think this is crucial at first, but it is obviously necessary long-term. And it is in the IP providers' best interests, because TLM makes their IP less costly to develop and more scalable to deploy.

So while there are still remaining hurdles in this category, they are not high hurdles.

What is left to do?

Probably the biggest remaining issue is that today's designs rely heavily on re-use. That means the move will be evolutionary for most companies, and during the transition the benefits of TLM adoption will be muted by Amdahl's Law: only the newly authored blocks see the productivity gain, while the re-used RTL blocks proceed at their old pace.
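The Amdahl's Law bound is easy to quantify. If a fraction p of the design effort moves to TLM and that portion gets s times more productive, the overall gain is 1 / ((1 - p) + p / s). The numbers in the sketch below are illustrative assumptions, not measured data.

```cpp
#include <cassert>

// Amdahl's Law applied to design-effort speedup: p is the fraction of
// effort that moves to TLM, s is the productivity multiplier on that
// fraction. The re-used RTL portion (1 - p) proceeds at its old pace.
double overall_speedup(double p, double s) {
    return 1.0 / ((1.0 - p) + p / s);
}
```

For example, if 30% of the blocks on a derivative chip are newly authored (p = 0.3) and TLM makes that work five times faster (s = 5, an assumed figure), the overall speedup is only 1 / (0.7 + 0.06), about 1.3x, which is why the gains look muted until more of the design moves up in abstraction.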

Why not start the process now so that you can get to full benefit realization sooner?

When should I have my resume ready?

Looking at Steve's adoption graph, the movement from RTL to TLM is already slower than the move from gates to RTL was. But the key pieces are in place for it to take off now. The good news is that a good logic designer will still be a good logic designer, no matter what language or abstraction level: the skills translate, and you have more automation with which to explore more. As for me (I know you care!) - RTL Compiler is embedded in C-to-Silicon, and we're using it to create a solid bridge from TLM down to placement. Think of it as building another layer on top of the RTL-to-GDSII foundation. So yes, my future is in construction.


Jack Erickson
