Q&A Interview: Kaufman Award Winner Discusses Verification Advances

Filed under: Industry Insights, verification, Kaufman Award, Randal Bryant


Dr. Randal Bryant, Dean of the School of Computer Science at Carnegie Mellon University, will receive the EDA industry's highest honor, the Phil Kaufman Award, in November for his pioneering work in formal verification and simulation. In this interview, he answers questions about his past contributions to EDA and shares his perspective on today's formal verification technology. The Kaufman Award is given annually by the EDA Consortium and the IEEE Council on Electronic Design Automation.

Q: What is your response to winning the 2009 Phil Kaufman Award?

A: I’m very pleased. One thing I like about the award is that it’s a reflection of the tight link between academia and the EDA industry. The research work that’s done by universities can be adopted very quickly by the EDA industry. You don’t see that in a lot of other fields.

Q: As a PhD student at MIT in 1979, you developed the first switch-level simulator. Why was that a step forward at the time, and is the technology still used today?

A: The idea behind switch-level simulation was to model a transistor circuit, but treat the transistors as simple switches that are either open or closed. It's purely at the logic level, and you don't try to capture circuit timing very closely. By contrast, the only transistor-level simulator at the time was SPICE. SPICE is an amazing tool, but the advantage of switch-level simulation is that it can really do chip-level simulation.

With the move to standard cell design, switch-level simulation isn’t used as much today. But people who do full-custom design use switch-level simulation. People who design specialized circuits like embedded memory use it. It’s useful for classes of circuits where you really can’t come up with a gate-level representation.
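[Editor's note: The short Python sketch below illustrates the switch-level idea Bryant describes, using a CMOS inverter as an example. The inverter, the function name, and the node-resolution rule are illustrative assumptions for this post, not a description of Bryant's MOSSIM simulator: each transistor is an ideal switch, and the output node is resolved by checking whether a closed-switch path connects it to power or to ground.]

# A minimal switch-level sketch (illustrative only, not MOSSIM):
# transistors are ideal switches, and a node's logic value is resolved by
# asking whether it has a closed-switch path to power (1) or ground (0).

def switch_level_inverter(gate):
    """Evaluate a CMOS inverter with both transistors modeled as switches.

    gate: 0 or 1 (the input node value)
    Returns the output node value, or 'X' if the node is floating or
    driven to both rails (which cannot happen for a well-formed inverter).
    """
    pmos_closed = (gate == 0)   # PMOS conducts when its gate is low
    nmos_closed = (gate == 1)   # NMOS conducts when its gate is high

    drives_vdd = pmos_closed    # closed-switch path from output to power
    drives_gnd = nmos_closed    # closed-switch path from output to ground

    if drives_vdd and not drives_gnd:
        return 1
    if drives_gnd and not drives_vdd:
        return 0
    return 'X'                  # floating or contention: unknown value

if __name__ == "__main__":
    for g in (0, 1):
        print(f"in={g} -> out={switch_level_inverter(g)}")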

Q: What you may be best known for, in EDA circles, is your 1986 paper on binary decision diagrams [BDDs]. This helped lay the foundation for formal verification. What was significant about the paper when it came out?

A: One contribution was the data structure itself and how it works and the cool stuff you could do with it. It was one of the first tools for solving heavy-duty Boolean problems. Nowadays we have very good Boolean SAT [satisfiability] solvers, but at the time, they were very weak. If you tried to throw any complex question into a SAT solver, like “is this fault testable,” it would just die. BDDs could handle it. And they can do more than Boolean satisfiability, because what they encode is all the ways a Boolean formula can be true, whereas a SAT solver just gives you one of those ways.

The idea [for BDDs] goes back to the 1950s. My work was to refine them and show some algorithms that can build upon and operate on BDDs. My first thinking about [BDDs] was for test generation.
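[Editor's note: The following Python sketch shows the reduced ordered BDD idea in a simplified form. The node representation, the mk/apply function names, and the example formula are assumptions made for this post, not code from the 1986 paper; the point is the two reduction rules (eliminate redundant tests, share identical nodes) and the apply operation that combines BDDs by Shannon expansion.]

# A minimal reduced ordered BDD (ROBDD) sketch, for illustration only.
# Nodes are (variable index, low child, high child) triples shared through
# a unique table; 0 and 1 are the terminal nodes.

from functools import lru_cache

unique = {}                      # unique table: (var, low, high) -> node id
nodes = {0: False, 1: True}      # node id -> terminal value or triple
next_id = 2

def mk(var, low, high):
    """Return a shared, reduced node testing variable var."""
    global next_id
    if low == high:              # reduction rule: redundant test
        return low
    key = (var, low, high)
    if key not in unique:        # sharing rule: one node per distinct triple
        unique[key] = next_id
        nodes[next_id] = key
        next_id += 1
    return unique[key]

def var(i):
    """BDD for the single variable x_i."""
    return mk(i, 0, 1)

def top_var(u):
    return nodes[u][0] if u > 1 else float('inf')

def cofactor(u, v, b):
    """Restrict node u with variable v set to b (0 or 1)."""
    if u <= 1 or top_var(u) != v:
        return u
    _, low, high = nodes[u]
    return high if b else low

@lru_cache(maxsize=None)
def apply(op, u, v):
    """Combine BDDs u and v with a Boolean operator ('and' or 'or')."""
    if u <= 1 and v <= 1:
        a, b = bool(u), bool(v)
        return int(a and b) if op == 'and' else int(a or b)
    m = min(top_var(u), top_var(v))
    low = apply(op, cofactor(u, m, 0), cofactor(v, m, 0))
    high = apply(op, cofactor(u, m, 1), cofactor(v, m, 1))
    return mk(m, low, high)

if __name__ == "__main__":
    x0, x1 = var(0), var(1)
    f = apply('or', apply('and', x0, x1), x0)   # (x0 & x1) | x0 simplifies to x0
    print(f == x0)   # True: reduction makes equivalence a pointer comparison

Because every Boolean function reduces to a single shared node under a fixed variable order, checking whether two formulas are equivalent, or whether a formula is satisfiable at all, becomes a constant-time comparison once the BDDs are built; the cost is that the BDD itself can grow very large, which is the memory issue discussed below.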

Q: When were BDDs applied to formal model checking?

A: In 1987 we had a graduate student here, Ken McMillan, who is now at Cadence. He was Ed Clarke’s graduate student. Within two weeks of arriving on campus, he figured out a way to take my [BDD] work and Ed Clarke’s work and create the first symbolic model checker. Today just about all model checkers are symbolic.

Nowadays, equivalence checkers use a variety of strategies including SAT checking and BDDs. When they work well, BDDs are excellent. Tools will often start with a BDD-based strategy, but they need some fallback in case the memory requirements become unreasonable.

[Editor’s note: In 1997 Bryant, McMillan, and two others shared the ACM Kanellakis Award for their pioneering work in BDD-based symbolic model checking].

Q: Do BDDs have some limitations?

A: Yes. The biggest problem with BDDs is the memory requirement. They tend to either come back with results almost instantly, or get mired down, using so much memory that the machine starts thrashing.

Q: Another technology you developed is symbolic trajectory evaluation (STE), a formal method based on symbolic simulation. What is it and how has it been applied?

A: STE is a form of symbolic simulation that provides capabilities similar to model checking, but in a more limited way. It works by simulating a circuit, but the signals carry not just 0s and 1s but symbolic functions of the input values. It is used heavily within Intel, and similar tools are used internally at other companies.
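[Editor's note: The Python sketch below illustrates plain symbolic simulation, the idea underlying STE, under simplified assumptions: it omits STE's unknown (X) values and trajectory formulas, and the mux example and helper names are invented for this post. One symbolic run builds an expression for the output, which can then be checked against a property for every concrete input assignment.]

# A minimal symbolic-simulation sketch (illustrative only, not STE itself):
# instead of driving concrete 0/1 values through the circuit, each wire
# carries a Boolean expression over symbolic inputs, so one simulation run
# covers all input combinations at once.

from itertools import product

def sym_and(a, b): return ('and', a, b)
def sym_or(a, b):  return ('or', a, b)
def sym_not(a):    return ('not', a)

def mux(sel, d1, d0):
    """Gate-level 2:1 mux simulated on symbolic values."""
    return sym_or(sym_and(sel, d1), sym_and(sym_not(sel), d0))

def evaluate(expr, env):
    """Evaluate a symbolic expression under a concrete assignment."""
    if isinstance(expr, str):            # symbolic input variable
        return env[expr]
    op, *args = expr
    vals = [evaluate(a, env) for a in args]
    if op == 'and': return vals[0] and vals[1]
    if op == 'or':  return vals[0] or vals[1]
    if op == 'not': return not vals[0]

if __name__ == "__main__":
    # One symbolic run of the mux stands for every concrete input pattern.
    out = mux('sel', 'd1', 'd0')
    # Check the property "output equals the selected data input" exhaustively.
    for s, a, b in product([False, True], repeat=3):
        env = {'sel': s, 'd1': a, 'd0': b}
        assert evaluate(out, env) == (a if s else b)
    print("mux property holds for all assignments")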

Q: What’s your assessment of today’s formal verification market?

A: It hasn’t really taken off. There are many instances of tools that showed great promise and didn’t quite make it. Companies with strong internal EDA groups have been more successful than those trying to buy commercial tools. In particular, companies like Intel and IBM make a lot of use of formal verification. It works for companies where they’ve really developed a culture of expertise.

Q: What needs to happen for formal tools to be more widely deployed?

A: The tools have to get better and more turnkey, and you have to have a motivated and trained community. People have to be committed to it and be willing to make changes to the design flow. A recent move that's been successful is to call the tools property checkers and not claim to do full formal verification – that's a great way to track down whatever bugs you want to look for. We also have newer languages for describing properties. These are steps in a direction that will help people understand the possibilities.

Q: Can formal verification be applied to software validation?

A: It’s a harder problem. With hardware you kind of know all the states of the system, and even though it’s exponential complexity, you’ve got a handle on it. In a software program it’s more difficult to track all the different variables in the program and their possible values. There’s been some success in very specific domains. The most impressive, actually, has been Microsoft – they’ve developed a number of tools that use different formal verification techniques to look for synchronization errors.

Q: What are formal verification researchers looking at today?

A: There’s a lot of interest in software. I think it’s sort of a natural extension. There’s an urgency from some companies, and there are a lot more bugs.

Q: What are your current research interests?

A: I'm the dean of the School of Computer Science, so much of my work has to do with being a dean. I've also become very interested in what can be done with large sets of data from a computational point of view. A great example would be a company like Google and all the web pages and information they gather. I think there is a potential EDA angle. If you think about all the simulation runs that are done, mining that data and figuring out what's going on with it would be interesting research.


Richard Goering
