My esteemed colleague, Steve Brown, recently wrote a blog post
trying to forecast what it will take to move the bulk of design from RTL
abstraction to transaction-level modeling (TLM). He uses the gate-level to RTL
migration as a reference point so that we can learn from history.
He lists a lot of factors that enabled the mainstream shift
from gate-level to RTL, and sketches out a similar list of what would be
required to move from RTL to TLM. It's a long list. Having worked in the logic
design area of EDA since roughly 1993, I'd like to offer my own take. And given
that I'm product manager of a product named "RTL Compiler", I have a personal
interest in understanding this.
Nobody has time to change the way they do things unless they
will realize significant benefit from doing so. Writing RTL code is not a
bottleneck today, but verifying that RTL is. This is similar to the early-to-mid
1990s, when gate-level functional verification became too cumbersome and fast
RTL simulation became available. Even though we made the jump to RTL, the
verification problem has grown even more quickly. This is a function both of
Moore's Law and of the increased need to verify hardware together with
software. Moving to C-based TLM modeling helps address this nicely.
But without a
sequential logic equivalence checking (SLEC) capability, don't you have to
re-verify everything at RTL?
Let's go back to our history lesson - when did logical
equivalence checking become mainstream? Not until the early 2000s. Until then,
you would just run a subset of your simulations on the gate-level design. But
even today, there is still gate-level simulation to verify asynchronous
interfaces. In the TLM world, you will always have to run some RTL simulation to
verify clock-dependent interfaces such as FIFOs and queues. There will also be
more of a need to verify clock domain synchronization and the like, but that
should be addressable using static methods.
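To make that concrete, here is a minimal SystemC sketch (the Producer, Consumer, and signal names are mine, purely for illustration): at the transaction level a FIFO is just an untimed channel, so cycle-level details like full/empty flag timing never appear in the model and still have to be checked against the RTL.

    // Producer/consumer over an untimed sc_fifo channel. Depth and blocking
    // behavior are modeled, but there is no clock and no flag timing.
    #include <systemc.h>

    SC_MODULE(Producer) {
        sc_fifo_out<int> out;
        void run() { for (int i = 0; i < 4; ++i) out.write(i); }
        SC_CTOR(Producer) { SC_THREAD(run); }
    };

    SC_MODULE(Consumer) {
        sc_fifo_in<int> in;
        void run() { while (true) cout << "got " << in.read() << endl; }
        SC_CTOR(Consumer) { SC_THREAD(run); }
    };

    int sc_main(int, char*[]) {
        sc_fifo<int> q(2);              // depth 2, but no clocked handshake
        Producer p("p");  p.out(q);
        Consumer c("c");  c.in(q);
        sc_start();                     // runs until all processes starve
        return 0;
    }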
Overall, I think we can safely say that the net verification
productivity benefit is large.
What do you give up when you make the move? Or, more specifically, can
machine-generated RTL deliver better performance, power, and area than RTL
created by an experienced designer?
This has always been a concern - and not a trivial one - as
we moved from transistors to gates and then from gates to RTL. If the automated
solution can deliver at least comparable results for a large majority of cases,
then there is a net benefit. And we're seeing this today, for instance here,
here, and here. This is largely delivered by connecting C-level synthesis to
implementation by having real production RTL synthesis estimation built right in.
In fact, you're more likely to achieve better results using
a methodology that lets you explore a larger solution space before you commit
to implementation, and moving to a higher level of abstraction enables just
that. And if there are still a couple of critical blocks that need to be
hand-implemented by a master craftsman, that is possible because all the other
blocks can be done so much more quickly.
But don't you have to
partition your control and datapath logic?
This is a big deal, but it is something else that modern
TLM synthesis tools have solved.
Therefore the costs associated with moving to TLM-based
design are also below any threshold that should prevent it.
Industry support and infrastructure
This is key in moving a product across the chasm and into
mainstream adoption, and a lot of Steve's items fall in this bucket.
You need designers to
learn the language. This isn't a huge stretch, since the language is C-based and
most folks know C, but there are nuances specific to this implementation of C.
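For a taste of those nuances, here is a tiny sketch, assuming the language in question is SystemC (the OSCI C++ class library referenced below); the Counter module is my own illustrative example. It is all legal C++, but the module/process structure and the bit-accurate types behave nothing like garden-variety C.

    // A 4-bit counter: the SC_MODULE/SC_METHOD structure and the
    // bit-accurate sc_uint<4> type (which wraps at 16) are SystemC-specific.
    #include <systemc.h>

    SC_MODULE(Counter) {
        sc_in<bool>         clk, rst;
        sc_out<sc_uint<4> > count;

        void tick() {
            if (rst.read()) count.write(0);
            else            count.write(count.read() + 1);
        }

        SC_CTOR(Counter) {
            SC_METHOD(tick);            // evaluated on every rising clock edge
            sensitive << clk.pos();
        }
    };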
EDA vendors need to
support a common language spec so customers can build a complete working flow.
Verilog's success was enabled by Cadence opening up the language to OVI. OSCI's
TLM 2.0 specification helps with this, and EDA vendors have built support for it.
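As a sketch of what a common spec buys you, here is a minimal TLM 2.0 target using the standard generic payload and blocking transport from the OSCI kit; the Memory model itself is my own illustrative example, with timing and error handling omitted. Any initiator that speaks the same payload and transport interface can be wired to it, whoever wrote it.

    // A minimal TLM-2.0 target: the generic payload plus b_transport is the
    // interoperability contract between models from different vendors.
    // (To simulate, bind an initiator socket to it inside an sc_main.)
    #include <cstring>
    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_target_socket.h>

    struct Memory : sc_core::sc_module {
        tlm_utils::simple_target_socket<Memory> socket;
        unsigned char mem[256];

        Memory(sc_core::sc_module_name name)
            : sc_module(name), socket("socket") {
            std::memset(mem, 0, sizeof mem);
            socket.register_b_transport(this, &Memory::b_transport);
        }

        // delay is part of the standard interface; ignored in this sketch.
        void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
            unsigned char* data = trans.get_data_ptr();
            sc_dt::uint64  addr = trans.get_address();
            if (trans.is_read())
                std::memcpy(data, &mem[addr], trans.get_data_length());
            else
                std::memcpy(&mem[addr], data, trans.get_data_length());
            trans.set_response_status(tlm::TLM_OK_RESPONSE);
        }
    };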
There should be
third-party IP available. This was recently highlighted by Richard Goering
and Gary Smith. I think this one is not crucial at first, but it is obviously necessary long-term. And it
is something that will be in the best interests of the IP providers because it
makes their IP less costly to develop and more scalable to deploy.
So while there are remaining hurdles in this category,
they are not high ones.
What is left to do?
Probably the biggest remaining issue is that today's designs
utilize a high degree of reuse. That means that this will be an evolutionary move
for companies, during which the benefits of TLM adoption will be muted by the
legacy RTL being carried forward.
Why not start the process now, so that you can realize the full
benefit sooner?
When should I have my team make the move?
Looking at Steve's adoption graph, the movement from
RTL-to-TLM is already slower than it was from gates-to-RTL. But the key pieces
are in place for it to take off now. The good news is that a good logic
designer will still be a good logic designer, no matter what language or abstraction level. The
skills translate, and you have more automation to help you explore more. As
for me (I know you care!) - RTL Compiler is embedded in C-to-Silicon, and we're
using it to create a solid bridge from TLM down to placement. Think of it as building another layer on top
of the RTL-to-GDSII foundation. So yes, my future is in construction.