Author Roundtable: New TLM Design and Verification Book

Filed under: TLM, verification, Design, McNamara, Bailey, Mosenson, book, roundtable, transaction, Stellfox, Watanabe

Cadence last week announced the publication of a new book entitled TLM-Driven Design and Verification Methodology. Available online (ordering information and preview here), the book describes in very practical terms what's needed to implement a transaction-level modeling (TLM) based design and verification flow. In this roundtable interview, four Cadence co-authors -- Michael McNamara, Guy Mosenson, Mike Stellfox, and Yosinori Watanabe -- join with another co-author, consultant Brian Bailey, to answer questions related to the book and its contents.

Q: There are many books about ESL and ESL-related topics. Why do we need a new one? What does this book say that hasn't been said before?

Bailey: As author of two of those other [ESL] books, I can say that most of the other books out there deal with tools or technologies. None of them have described a complete methodology that you can use with those tools. And that is one of the big things this book helps you achieve.

Mosenson: I think there is no other book that addresses design and verification together in such a thorough way. Also, the methodology is backed up with examples that demonstrate the whole design and verification flow. I think that is unique.

McNamara: The book addresses what's necessary to move to the transaction level, and looks at both design and verification. By describing both of those and how they work together, this is a stand-on-its-own textbook that shows you how to move design to the next level.

Q: Many arguments have been raised for moving from RTL up to a TLM level of abstraction. What are the strongest arguments? Is reduced SoC development cost one of them?

Stellfox: I think the biggest reason people want to move [to TLM] is the size of the designs we have to both implement and verify today. You need more abstraction just to deal with the size and complexity. On the design side, we see significant productivity improvements compared to manual capture of RTL, and on the verification side, we're seeing similar orders-of-magnitude improvements in run-time performance and verification throughput.

In the past a lot of the focus was only on the design side. By addressing both design and verification concurrently, we can have a significant impact on the overall flow, which will translate into reduced cost.

Watanabe: I would like to add two more arguments. The key words are "localization" and "reuse." Localization means that as soon as you make design decisions you verify them. In the current flow, design decisions can be made at different levels of abstraction, but verification is done only at RTL. Here [TLM flow], at each level of abstraction you make certain decisions and then you verify them. The verification assets used to verify those decisions can be reused at lower levels of abstraction.

Reuse occurs not only within a single project but also across multiple projects, because we've raised the level of abstraction. Cost is not only about single projects -- it's about how to reuse assets across different projects.
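
To make the reuse idea concrete, here is a minimal SystemC sketch -- illustrative only, not an example from the book. The transaction type mem_txn and the checker are invented for this illustration; the point is that a checker written against a TLM analysis interface can be fed by a TLM model at the architectural level or by an RTL monitor later in the flow, without modification.

#include <systemc>
#include <tlm>
#include <map>

// Hypothetical transaction type, invented for this illustration.
struct mem_txn {
    sc_dt::uint64 addr;
    unsigned int  data;
    bool          is_write;
};

// The checker knows nothing about the abstraction level of its source.
struct mem_checker : sc_core::sc_module, tlm::tlm_analysis_if<mem_txn> {
    mem_checker(sc_core::sc_module_name name) : sc_module(name) {}

    // Called by whatever publishes transactions: a TLM model at the
    // architectural level, or an RTL monitor later in the flow.
    void write(const mem_txn& t) {
        if (t.is_write)
            shadow_mem[t.addr] = t.data;             // update reference model
        else
            sc_assert(shadow_mem[t.addr] == t.data); // read data must match
    }

    std::map<sc_dt::uint64, unsigned int> shadow_mem; // reference memory
};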

McNamara: We're coming out of this global recession, in which customers cut back head count on the teams developing hardware and software. As customers come out of that, we need to develop with smaller teams; we're not going back to the head count we used to have. By adopting TLM design and verification, our customers can develop a lot more products more efficiently using fewer people than were required in 2005.

Q: The OSCI [Open SystemC Initiative] TLM 2.0 standard, high-level synthesis tools, and virtual prototypes are already in place...why is it necessary to define a methodology?

Mosenson: The more complex the domain is, the more need there is for prescriptive guidance on how to do things optimally. There are individual pieces that are valuable, but up to now there has not been a fully described methodology that shows how to go through this complicated process of designing and verifying through multiple levels of abstraction. The problem is complex, and the answer requires a thorough tryout and well-written guidance and examples.

Bailey: One of the important aspects of ESL is that it's no longer single-domain, it's multi-domain. It's about design and verification, it's about hardware and software. We're going to see many domains coming together. As the solution gets larger and larger, we need methodologies to go across these domains to make sure everyone understands the process and the flow.

Q: The book advocates SystemC and talks about high-level synthesis. But there is no standard SystemC TLM synthesizable subset at present. How does the methodology get around that?

McNamara: There is a draft standard working its way through the [OSCI] committees. Already many companies, including Cadence, support more than what's in the draft standard. So when the draft standard becomes an actual standard, there will be a later version that expands the standard and covers additional areas. By necessity the standard is the least common denominator of what everybody supports.

Bailey: The OSCI TLM standard itself was defined with verification in mind. So Cadence has defined a synthesizable subset of that standard, which hopefully at some point will become an industry standard as well.
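
As a rough illustration of what a synthesizable subset means in practice, here is a sketch of the conservative, clocked-thread coding style high-level synthesis tools commonly accept: static structure, fixed-width data types, an explicit reset, and no dynamic allocation. This reflects a typical style, not the draft OSCI subset itself, and what any given tool accepts will vary.

#include <systemc>

SC_MODULE(accumulator) {
    sc_core::sc_in<bool> clk;
    sc_core::sc_in<bool> rst;
    sc_core::sc_in<sc_dt::sc_uint<16> >  din;
    sc_core::sc_out<sc_dt::sc_uint<32> > sum;

    void run() {
        sc_dt::sc_uint<32> acc = 0; // reset state
        sum.write(acc);
        wait();
        while (true) {
            acc += din.read();      // fixed-width arithmetic
            sum.write(acc);
            wait();                 // one addition per clock cycle
        }
    }

    SC_CTOR(accumulator) {
        SC_CTHREAD(run, clk.pos());  // clocked thread: commonly synthesizable
        reset_signal_is(rst, true);  // active-high reset
    }
};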

Q: One concept stressed in the book is the separation of computational and communications IP blocks. Why is this important?

Watanabe: Because of reuse, it is important to separate different concerns that are independent of each other. By separating these two, each model used for computation or communication has further chances to be used in another project. For example, a computation model can be reused in another project that uses other communication mechanisms.

We also separate behavior and timing. When we capture the model we try not to include timing unless it's absolutely necessary to define the behavior itself. You can see in chapters 4 and 5 how we make this separation.

Stellfox: By separating the computational and communications parts of design IP, we can verify them separately. You can verify TLM communications IP once, and then use it in a number of different blocks that are computational in nature.
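
Here is a minimal SystemC/TLM-2.0 sketch of that separation -- illustrative, not from the book, with the module and its transform invented for the example. The computational core is a plain member function with no notion of buses or protocols, while communication, and the timing annotation Watanabe mentions, stays confined to the socket callback.

#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>

struct filter_ip : sc_core::sc_module {
    // Communication side: a standard TLM-2.0 target socket.
    tlm_utils::simple_target_socket<filter_ip> socket;

    SC_CTOR(filter_ip) : socket("socket") {
        socket.register_b_transport(this, &filter_ip::b_transport);
    }

    // Communication: decode the generic payload, delegate to computation,
    // and keep the timing annotation here, out of the behavior.
    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        compute(trans.get_data_ptr(), trans.get_data_length());
        delay += sc_core::sc_time(10, sc_core::SC_NS); // timing, kept separate
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }

    // Computation: pure, untimed behavior, reusable with any
    // communication mechanism.
    void compute(unsigned char* data, unsigned int len) {
        for (unsigned int i = 0; i < len; ++i)
            data[i] = static_cast<unsigned char>(~data[i]); // placeholder transform
    }
};

Swapping in a different interconnect then touches only the socket side; the compute function is reused unchanged.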

Q: Do you envision the availability of third-party TLM IP?

Bailey: Absolutely. Some IP today is already available as TLM models, although we don't think of it as such. If you think about processors, instruction set simulators are TLM models. There will be a growing demand for other kinds of models. Over time, it will become a mandatory requirement for an IP developer to provide a TLM model.

McNamara: In the early days of RTL, there wasn't a rich IP market. But in today's market, if you're going to have a new abstraction, you need to have an ecosystem where people can exchange models and have confidence that they're going to run in various tools and be usable in in-house systems as well.

Q: As some of you have noted, the book emphasizes verification as well as design. What needs to change for verification environments to support a TLM-based flow?

Stellfox: The first thing we did was to build upon some of the tried-and-true approaches that people have been using over the past 10 years. We extended the concepts of metric-driven verification, automated constrained-random stimulus, and functional coverage metrics. We also extended the OVM [Open Verification Methodology], and now UVM [Universal Verification Methodology], to operate on TLM models and multiple-abstraction designs. That means you could start with TLM or even with a pure C model, and eventually end up with an RTL model, with each feature of the design verified in a localized way at the level of abstraction at which it is introduced.
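
As a toy illustration of constrained-random stimulus and functional coverage applied at the transaction level: the sketch below uses plain C++ randomization for brevity, where a real flow would use the UVM machinery Stellfox describes. The register window and the commented-out drive_tlm_model hook are invented for the illustration.

#include <cstdint>
#include <iostream>
#include <random>

int main() {
    std::mt19937 gen(42); // reproducible random stimulus

    // Constraints: addresses fall in the device's register window,
    // word aligned; reads and writes are equally likely.
    std::uniform_int_distribution<uint32_t> addr_dist(0x1000, 0x10FF);
    std::bernoulli_distribution write_dist(0.5);

    // Functional coverage: did we hit the interesting cases?
    bool saw_read = false, saw_write = false, saw_boundary = false;

    for (int i = 0; i < 1000; ++i) {
        uint32_t addr = addr_dist(gen) & ~0x3u; // enforce word alignment
        bool is_write = write_dist(gen);

        // drive_tlm_model(addr, is_write); // hypothetical hook into the TLM model

        saw_read     |= !is_write;
        saw_write    |= is_write;
        saw_boundary |= (addr == 0x1000 || addr == 0x10FC);
    }

    std::cout << "coverage: read=" << saw_read
              << " write=" << saw_write
              << " boundary=" << saw_boundary << "\n";
    return 0;
}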

Mosenson: I like to look at this as an evolutionary revolution. It's evolutionary because we build upon what exists. And yet because of the multiple abstraction layers, in a way it's revolutionary also. In this methodology, verification is multi-language more than in the past, and reuse is more critical than in the past. You are trying to reuse between different levels of abstraction. As Yoshi said earlier, you find the best localized place to verify, then you try to verify as early as possible and not repeat the verification.

Q: This book hardly mentions the term "ESL" and doesn't refer to an "ESL flow." Why is that?

Bailey: This book is a very down-to-earth, practical set of tools and methodologies to solve a particular part of an ESL flow. By calling it a TLM-driven design and verification methodology, we're a lot more specific about what the methodology does and what it doesn't do.

Q: Finally, what's the connection between this methodology and System Realization and SoC Realization as described in the EDA360 vision paper?

McNamara: [TLM] is a component of both System Realization and SoC Realization, and some of Silicon Realization. There's a design and verification aspect and some notion of getting the software in there at all these levels. [TLM] is really a foundational technology that helps at multiple realization levels.

Stellfox: TLM design and verification is an enabler for System Realization concepts such as quickly creating an architecture, exploring architectural tradeoffs, and supporting the development of software. It comes into SoC Realization and Silicon Realization through its connection to synthesis, RTL functional verification, and layout flows.

Notes: In an earlier interview, Brian Bailey described work he's done with Cadence on the development of a TLM design and verification flow. He also discusses the book in his blog at Techbites.com.

Felice Balarin of Cadence, who was not part of the roundtable discussion above, was also a co-author of this book.

Richard Goering