
Unified Verification Coverage Metrics – Not An Impossible Dream

Filed under: Industry Insights, UCIS, unified verification coverage, OVM, verification

Will it ever be possible to come up with a unified coverage metric that tells you how complete a verification task is, regardless of which engines were used? It now appears things may be headed in that direction.

Many design and verification teams today use code coverage and/or functional coverage to measure simulation results. In a coverage-driven verification flow, these coverage metrics provide valuable feedback that enables very targeted simulation runs. In formal verification, the notion of "coverage" is less clear-cut, but there are various forms of "assertion coverage." A simple example is determining whether or not an assertion actually fired. Further, formal verification is often used to prove whether functional coverage points are reachable.
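To make the simulation-side metric concrete, functional coverage boils down to bins and hit counts: you declare the interesting cases up front, sample what simulation actually produced, and report the fraction of bins exercised. The sketch below is a toy model in Python; the class, names, and the bus-burst example are all illustrative, not any tool's API.

```python
# Toy model of a functional coverage point: named bins, hit counts,
# and a coverage percentage. Illustrative only -- not any tool's API.

class Coverpoint:
    def __init__(self, name, bins):
        self.name = name
        self.hits = {b: 0 for b in bins}   # bin name -> hit count

    def sample(self, value):
        """Record a sampled value against its bin, if one matches."""
        if value in self.hits:
            self.hits[value] += 1

    def coverage(self):
        """Percentage of bins hit at least once."""
        covered = sum(1 for n in self.hits.values() if n > 0)
        return 100.0 * covered / len(self.hits)

# Example: cover the burst types of a hypothetical bus transaction.
cp = Coverpoint("burst_type", ["SINGLE", "INCR", "WRAP4", "WRAP8"])
for observed in ["SINGLE", "INCR", "INCR"]:   # values seen in simulation
    cp.sample(observed)
print(cp.coverage())   # 50.0 -- two of four bins hit
```

A coverage-driven flow closes the loop on this number: the unhit bins (here WRAP4 and WRAP8) tell you exactly which stimulus to generate next.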

The problem is that formal and simulation engines have different metrics that are typically reported separately from one another. Because you don’t see a unified view, pulling together the results of multiple runs from multiple tools is a tedious and time-consuming process. Formal metrics are often “lost,” resulting in unneeded simulation runs. The problem worsens in multi-vendor flows, where each verification tool vendor has its own API, coverage database, and approach to verification coverage.

Cadence now offers a “check meter” capability in its Incisive formal and simulation products that provides formal metrics alongside testbench coverage metrics. “Our users can now plan for checks and coverage and then see the results from formal and simulation together in a unified view,” said Sarah Lynne Cooper Lundell, senior product marketing manager at Cadence. “This is important because it allows them to validate that all coverage goals were reached in the context of the checkers.”

The next step, she said, is to come up with “unified metrics” across formal, simulation and acceleration engines. The idea is that you’d be able to check off something when it’s verified, and see what has yet to be covered, regardless of which verification engines were used.

“You should be able to come up with one number,” said Tom Anderson, product marketing director at Cadence. “You should be able to see that across all platforms, all coverage types, and all regression runs that your number is 97.8 percent. And if that’s good enough, you’re done.”
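One way a single number like that could be produced is a weighted roll-up across engines and coverage types. The sketch below shows the arithmetic under one simple assumption, equal weight per coverable item; the data layout, the figures, and the weighting scheme are assumptions for illustration, not any vendor's actual merge algorithm.

```python
# Sketch of rolling per-engine, per-type coverage results into one
# overall figure. The data layout, numbers, and equal per-item
# weighting are assumptions, not any vendor's merge algorithm.

# (engine, coverage type) -> (items covered, items total)
results = {
    ("simulation", "code"):       (4520, 4600),
    ("simulation", "functional"): (310, 340),
    ("formal", "assertion"):      (96, 100),
    ("acceleration", "code"):     (4480, 4600),
}

def unified_coverage(results):
    """Merge all runs into one percentage, weighting every
    coverable item equally across engines and coverage types."""
    covered = sum(c for c, _ in results.values())
    total = sum(t for _, t in results.values())
    return 100.0 * covered / total

print(f"{unified_coverage(results):.1f}%")
```

A real merge would also have to deduplicate items that more than one engine covers (the simulation and acceleration code-coverage spaces above overlap), which is exactly the bookkeeping a shared coverage database is meant to enable.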

The ideal situation, Tom said, is to use similar metrics for formal and simulation “as much as possible.” He said it should be possible to take what we know about code coverage in simulation, and translate it into something equivalent on the formal side. “The best way to get formal into the mainstream is to report back using the metrics people already understand,” he said.

This may in fact be possible. In a 2006 paper, researchers looked at each of the metrics used in simulation-based verification and presented corresponding metrics that could be used in a formal setting. For example, if a line of code can be mutated or replaced and verification still passes, no check actually observes that line, so it counts as uncovered. An SCDsource interview last year with one of the authors, Hana Chockler of IBM's Haifa Research Laboratory, provided more insights into the kinds of coverage metrics that might be useful in formal verification.
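The mutation-based idea can be sketched as a loop: corrupt one statement at a time, rerun the checks, and mark the statement covered only if some check fails. The toy "design," its checks, and the mutation scheme below are illustrative stand-ins, not the paper's actual algorithm.

```python
# Sketch of mutation-based coverage: a statement counts as covered
# only if corrupting it makes some check fail. The toy "design" and
# mutations are illustrative, not the cited paper's algorithm.

def checks(a, b, outputs):
    """Reference checks: only the sum and product are observed."""
    s, p = outputs
    return s == a + b and p == a * b

def mutated_design(a, b, kill):
    """Run the design with statement number `kill` corrupted."""
    s = (a + b) if kill != 1 else 0    # statement 1
    p = (a * b) if kill != 2 else 0    # statement 2
    unused = (a - a) if kill != 3 else 99  # statement 3: unobserved
    return s, p

def mutation_coverage(stimuli, n_statements=3):
    covered = []
    for stmt in range(1, n_statements + 1):
        # Covered if any stimulus exposes this mutation via a check.
        killed = any(not checks(a, b, mutated_design(a, b, stmt))
                     for a, b in stimuli)
        covered.append(killed)
    return covered

print(mutation_coverage([(2, 3), (0, 5)]))  # [True, True, False]
```

Statement 3 survives every mutation because no check observes it, so it is reported uncovered; in a formal setting the same question can be asked exhaustively rather than per-stimulus.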

What about multi-vendor coverage? That’s where the Accellera Unified Coverage Interoperability Standard (UCIS) comes in. UCIS seeks to define standard coverage models and to define an interoperability standard that lets users exchange coverage data between different tools. “When it becomes standardized, it’s pretty likely you’ll be able to read and write coverage from anyone’s tool,” Tom said.

Cadence, Mentor and Synopsys are all participating in UCIS. Along with extending the Open Verification Methodology (OVM) with a unified coverage database, Cadence and Mentor have presented a joint proposal for an API and data model. Synopsys has presented an alternate proposal, and balloting is expected this summer.

One way or another, unified coverage metrics are on the way. What’s ultimately important is not a set of numbers or the engines beneath them – it’s a yes or no answer to the question, “am I done yet?”

Richard Goering
