DVClub Talk: Software-Inspired Technique Predicts IC Verification Closure

Filed under: Industry Insights, Oracle, Sun, verification, DVClub, HP, coverage, Smith, bug density, verification closure, processors, bug arrival, bugs, coverage metrics, Rayleigh curve, verification metrics

What's the hardest question for a verification manager to answer? Greg Smith, senior verification manager at Oracle, found that out soon after he moved from design into verification at Hewlett-Packard some years ago. The question is, "when will you be done?" At a DVClub Silicon Valley meeting August 17, Smith demonstrated a novel way to answer that question using a technique derived from the software testing world.

As I noted in a recent blog post, DVClub is an organization that holds lunch meetings for verification engineers in 10 cities around the world. Cadence is a long-time sponsor. The recent DVClub Silicon Valley meeting actually had two speakers from Oracle; I'll write about the second speaker, who discussed the use of a relational database for coverage collection and analysis, in a later blog post.

When Smith was confronted with the question, "when will you be done," he started thinking about conventional verification metrics. These included test plan completion, functional and code coverage percentages, bug arrival rates, RTL rate of change, and schedule milestones. "The real problem I have with all these metrics," he said, "is that they are all backward looking. They tell you what ground you covered but they don't provide a lot of predictive capability.  And they are all subject to the human errors of omission and commission."

Out of the "Stone Age"

Smith started looking for a quantitative, calculated way to predict and track verification schedules. He started researching software development metrics, and found a wealth of material. "We are in the Stone Age compared to software when it comes to using advanced techniques to analyze your design, and to construct predictive project metrics," he said.

What Smith discovered is a technique called "Rayleigh Curve Based Estimation" that makes it possible to predict how many resources are needed to test a design, when, and for how long. It also predicts bug arrival rates with a high degree of accuracy. (Smith referenced a paper by Holly Richardson that I have been unable to find online, but some articles on the same topic are located here).

The "magic," as Smith described it, lies in the Rayleigh Distribution Model formula, which in the form commonly used for software estimation is Em = (6 * Er / Td^2) * t * e^(-3 * t^2 / Td^2). To explain a little further:

Em = errors expected in this period
Td = total number of measurement periods (how long is the project)
t = elapsed time (which week am I in)
Er = total number of errors expected

The point is that you only need two pieces of information -- project duration, and the number of bugs expected. So how can one predict expected bugs? Smith went through databases for completed projects and figured out the bug density by looking at bugs per line of executable RTL code. He came up with a density of 1 bug/150 lines of Verilog, which incidentally is 2-3X lower than the norm for software bugs.
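To make the arithmetic concrete, here is a minimal Python sketch of the two-input calculation described above. The per-period formula used is the standard software-estimation form of the Rayleigh model (Em = (6 * Er / Td^2) * t * e^(-3 * t^2 / Td^2)), not necessarily the exact expression from Smith's slide, and the design size is a made-up number for illustration:

```python
import math

def rayleigh_bug_curve(total_bugs, total_weeks):
    """Predicted bug arrivals per week under a Rayleigh curve.

    Uses the common software-estimation form
        Em(t) = (6 * Er / Td**2) * t * exp(-3 * t**2 / Td**2)
    which places roughly 95% of the expected bugs within Td periods.
    """
    return [
        (6.0 * total_bugs / total_weeks**2) * t
        * math.exp(-3.0 * t**2 / total_weeks**2)
        for t in range(1, total_weeks + 1)
    ]

# Input 1: expected bugs, from Smith's density of ~1 bug per 150 lines of RTL.
rtl_lines = 300_000                  # hypothetical design size
expected_bugs = rtl_lines / 150      # 2000 expected bugs

# Input 2: project duration in measurement periods (weeks).
weekly = rayleigh_bug_curve(expected_bugs, total_weeks=52)

# Week with the highest predicted bug arrival rate -- the staffing peak.
peak_week = max(range(len(weekly)), key=weekly.__getitem__) + 1
```

Comparing each week's actual bug count against `weekly` is the tracking step Smith described: a project whose arrivals run below the curve may simply be under-resourced rather than nearly done.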

Moving to Processors

Smith's technique worked well at HP, where he was involved in 10 ASIC tapeouts with first-time success. In his presentation, he showed how he discovered that some verification projects that looked like they were near completion really weren't. Since the technique can also predict how many resources are needed to find a given number of bugs per week, he was able to show when projects needed more resources to be completed on time.

When Smith moved to Sun Microsystems, which was subsequently bought by Oracle, a problem emerged. His technique worked fine for ASICs and large FPGAs, but what about processors? Here, bugs per line of code is not as meaningful, because processors are much more structured than ASICs. Smith did some research and came up with a way to estimate bug density by looking at the average number of bugs found per condition.

In a processor project that was just completed, Smith said, his predicted bug count from 2 years ago was only off by one percent. In conclusion, he noted, verification engineers can use this method to track their bug discovery rate versus an expected rate, providing a reference point to assess not only "am I done" but the harder question of "when will I be done."

Smith's presentation is available online.

Richard Goering


