Q&A: After 20 Years, Hierarchical Design and Verification Gets Real

Filed under: Industry Insights, ESL, TLM, verification, Simulation, formal verification, top-down design, datapath, hierarchical design, high-level design, control, hierarchical verification, Kurshan

It seems like a simple proposition -- you should be able to design and verify at a high level of abstraction, without re-verifying everything at a low level. But after 20-plus years of discussion in academia and industry, that's still not the case for most design teams. Cadence Fellow Bob Kurshan has researched this topic extensively and has some thoughts about what's needed to make "hierarchical" design and verification real -- and why it's going to happen.

Q: At a recent conference, you gave a presentation on "verification-guided hierarchical design." What's the basic idea behind this methodology?

A: It has a lot of names -- I've been calling it hierarchical design or hierarchical verification. It goes back 20 years or more, to when people recognized the need to move verification, both formal and simulation, to a higher level of abstraction. It comes and goes in various forms. The latest form is TLM [transaction-level modeling].

Hierarchical design says that you start with high-level functional aspects of your design, and you first make a representation of those. Once you have those [functional aspects] in place, and have gotten the bugs worked out, then and only then should you start worrying about low-level architectural issues. Then you start fleshing out your design by adding low-level datapaths.

The whole philosophy of hierarchical design turns design as it's conducted today on its head. Today you start with a conceptual description, which leads to a high-level architecture that's going to implement functions. Then a functional spec is written up. The first thing a designer does today is to define the datapaths, which are like the skeleton around which the whole design is created.

Q: If the datapaths are the "skeleton," what's wrong with defining them first?

A: In a sense designers are trying to define the skeleton before they define the body. Sometimes it's better to start with the skin and flesh, which is the function you really want, and only add the skeleton later. That's the whole idea of hierarchical design. You start with a high-level behavioral representation of what the design is supposed to do, and you write this in a complete way so you can actually do testing at a functional level. Only when you have the function right do you add in the lower-level structures -- the ALUs and pipelines and datapaths.

It's actually a top-down, bottom-up process. You have datapaths and busses in mind, but when you start coding you don't start coding the datapaths first, you start coding the functional aspects first so you can test in the absence of the datapaths. You do that by abstracting the datapaths. You represent the datapaths with stubs that are actually semantic abstractions that you will eventually refine out to the full datapath.
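(Note: the sketch below is a hypothetical plain-C++ illustration of this idea, not Cadence code or any specific TLM API. The datapath is represented by a stub that captures only what it computes, so the surrounding function and control can be tested before any cycle-accurate pipeline or ALU model exists.)

```cpp
#include <cstdint>
#include <iostream>

struct Transaction {
    uint32_t opA;
    uint32_t opB;
};

// Datapath stub: a semantic abstraction of the eventual multi-cycle
// multiplier. It is functionally complete, so the high-level function
// around it can be tested before any RTL datapath is written.
uint32_t datapath_stub(const Transaction& t) {
    return t.opA * t.opB;   // the "what", with no notion of cycles
}

// High-level functional model: consumes a transaction using the stub
// in place of the real datapath; a later refinement replaces the stub.
void process(const Transaction& t) {
    std::cout << "result = " << datapath_stub(t) << '\n';
}

int main() {
    process({6, 7});   // functional test with the datapath abstracted away
    return 0;
}
```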

Q: Why hasn't high-level verification caught on, after 20 years of effort?

A: There are two basic troubles with previous attempts to implement high-level verification. One is methodological, and the other is semantic or conceptual.

Taking the conceptual problem first, the last thing any verification manager wants is a process that will make verification even more costly. But the behavioral methods offered over the past decades have in effect done just that. They have required verification at a high level, and then required re-verifying everything at RTL.

As for the methodological problem, there is a risk in adopting a new methodology. People are very reluctant to make changes, and rightly so because of the costs.

Q: So how do we get around these problems?

A: We need a solution to the conceptual problem so that we can do less verification, not more. There has to be some kind of semantic connection between the high level and the next level down, because only then can you know that if you do verification at the high level, you don't have to redo it at the lower level.

Any time you are offering a new methodology, you have to have a roadmap associated with it consisting of what I call "small steps" where each step gives some benefit at a very small cost. A succession of small steps should lead to the technology you're advocating. Formal model checking is an example. It did not come into full-blown existence overnight; it was adopted incrementally.

Q: How do we establish a semantic connection?

A: Just very small modifications of what we are now calling TLM allow you to make a semantic connection between TLM and RTL models. As soon as you have that semantic connection, any property you have verified at a high level is guaranteed to hold at all lower levels. You do not have to repeat verification at the RTL.
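(Note: the following is a toy plain-C++ sketch of that guarantee, not any particular Cadence flow. If every behavior the refined model can produce is one the abstract model allows, then a property proved over the abstract model automatically covers the refinement.)

```cpp
#include <cstdint>
#include <iostream>

// Abstract (TLM-level) model: for input x it only promises the result is
// x+1 or x+2 -- a deliberately loose, non-deterministic specification.
bool abstract_allows(uint32_t x, uint32_t result) {
    return result == x + 1 || result == x + 2;
}

// Refined (RTL-like) model: resolves the non-determinism to one choice.
uint32_t refined(uint32_t x) {
    return x + 2;
}

int main() {
    // Consistency check (refinement): every refined behavior must be one
    // the abstract model allows. Once that holds, a property verified
    // against the abstract model (say, "result > input") carries down
    // to the refined model for free.
    for (uint32_t x = 0; x < 8; ++x) {
        if (!abstract_allows(x, refined(x))) {
            std::cout << "refinement broken at x=" << x << '\n';
            return 1;
        }
    }
    std::cout << "refined model stays within the abstract model\n";
    return 0;
}
```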

Q: What's an example of TLM modification?

A: From a conceptual perspective, I think the most important thing we need to do is introduce non-deterministic delays. This doesn't require any new language features or syntax. It's very easy to do. You just add a synthetic input, and you assign that input to represent delay. To make a semantic connection between the high level and the low level, you need to be able to say, "I have consumed this transaction but the amount of time it's going to take is non-deterministic, because I don't know how many clock cycles that represents at a lower level."
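(Note: another hypothetical plain-C++ sketch, not a specific TLM or SystemC API. The 'synthetic input' appears as an extra delay parameter the model never constrains; a formal tool would treat it as free, so whatever is proved at this level holds for any cycle count the RTL eventually takes.)

```cpp
#include <cstdint>
#include <iostream>

struct Transaction {
    uint32_t payload;
};

// 'delay_cycles' is the synthetic input representing a non-deterministic
// delay. In formal verification it is left unconstrained, so any property
// proved here holds for every possible lower-level cycle count; in
// simulation it can simply be driven with arbitrary values.
void consume(const Transaction& t, uint32_t delay_cycles) {
    uint32_t result = t.payload + 1;   // functional effect, timing-independent
    std::cout << "consumed payload " << t.payload
              << " -> " << result
              << " after " << delay_cycles << " (abstract) cycles\n";
}

int main() {
    // Sample a few arbitrary delays; a model checker would explore all of them.
    for (uint32_t d : {1u, 3u, 17u}) {
        consume({41}, d);
    }
    return 0;
}
```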

Q: How are you working with Cadence customers on hierarchical design?

A: We're doing this in two ways. One is to demonstrate that we can accelerate RTL verification by introducing concepts for abstracting certain components of the RTL. Secondly, for customers who are already trying out SystemC TLM-based approaches, we are offering the ability to simply verify the consistency of the high-level models with the low-level models. It's a small step that doesn't require any change of methodology. We'd like to nudge them along a roadmap towards high-level design where RTL might be automatically generated with something like C-to-Silicon Compiler.

I have given many talks on this [hierarchical design] in public venues, both to Cadence customers and at technology conferences. (Note: Kurshan's presentation from a 2011 conference at New York University is available online.)

Q: What are you hearing from customers about high-level verification?

A: I am hearing from customers who say, "our designers are completing their designs, and then sitting on their hands waiting for PV [product validation] people to find bugs. This is very inefficient. We want designers to start doing unit tests and start running verification on their own." We have been advocating this since time zero! Now the concept is being re-invented by our customers, which is fantastic.

Customers are starting to recognize the urgency of having designers do verification. It is going to accelerate the whole design process, making it possible to do testing more quickly and more upstream, and it will also feed directly into a hierarchical design methodology where we have designers running verification on high-level designs. All of a sudden everything is coming together and I couldn't be more thrilled about this.

Richard Goering
