Archived Webinar: Which Verification Coverage Metrics to Use When

Filed under: Industry Insights, Metric-driven verification, Incisive, Enterprise Manager, MDV, webinar, assertions, coverage, vPlan, functional coverage, metrics, coverage metrics, verification coverage, verification plan, assertion coverage, Nehls, Brennan, code coverage

What metrics matter most at different stages of the verification process? How can metrics be leveraged to reduce the risk of failures in your IC designs? These questions were answered in a recently archived Cadence webinar that offers a comprehensive primer on the use of code coverage, functional coverage, and assertions in functional verification.

Titled "What Metrics Matter - a User's Perspective on Coverage," the webinar was presented by John Brennan, product marketing director at Cadence, and John Nehls, solutions architect at Cadence. The webinar is available for viewing here. Some highlights from the webinar follow.

At the start of the webinar, Nehls noted that the cost of fixing a bug increases exponentially the later the bug is found. "Putting additional effort into the verification process up front is what customers are trying to achieve, and the way it's done is through metrics," he said. "Knowing what metrics matter at certain points of the design process is really critical." Highly productive teams, Nehls said, use all three kinds of metrics - code, functional, and assertions - in a complementary way.

"Are We Done Yet?"

The question verification managers always ask, Nehls said, is "are we done yet." Often this is answered based on emotions or intuition. Perhaps the team is out of money or is exhausted, the competitor's product is already shipping, the software people seem happy, the boss says "ship it," or there are no bugs for two weeks. "What we're suggesting is a more rigorous criteria with concrete goals established," Nehls said. "We are moving from a subjective view of the verification process to a quantitative view."

There is a name for this approach - metric-driven verification (MDV). Nehls defined it as "the notion of applying rigorous criteria to metrics from multiple sources and managing them to completion." A critical aspect of MDV is starting with a verification plan that considers the goals of the verification process, the key features and functions that need to be verified, and the approaches that are needed to verify a particular function. Then teams can execute the verification, collect coverage metrics, and bring the results back into the verification plan in a single view. Teams can apply different metrics for different parts of the verification effort, and can use both simulation and formal analysis.

Nehls spoke about the different types of metrics and where they are best applied. Code coverage, for example, is a measure of how well the RTL code is exercised, and isn't very useful until there is a robust testbench and thorough functional coverage is already underway. This occurs during the IP verification phase. Later on, in SoC level verification, a type of code coverage called "toggle coverage" (which tracks activity on signals) is very useful for verifying integration and connectivity.

Assertions can be used much earlier, starting with the block-level verification done by designers. The same assertions can then be reused by verification engineers during IP verification and SoC level verification. Assertions can be used in both simulation and pure formal verification. Formal, assertion-based verification is very useful for certain types of blocks, as Nehls described.
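The webinar itself carries the examples, but the kind of designer-written assertion Nehls describes can be sketched in a few lines of SystemVerilog. This is an illustrative sketch only: the signal names (clk, rst_n, req, gnt) and the 4-cycle bound are hypothetical, not taken from the webinar. The same property can be checked by a simulator during IP verification or proven exhaustively by a formal tool.

```
// Hypothetical handshake check: every request must be granted
// within 1 to 4 clock cycles. Signal names are illustrative.
property req_gets_gnt;
  @(posedge clk) disable iff (!rst_n)
    req |-> ##[1:4] gnt;
endproperty

// The same assertion serves simulation (fires on a violating trace)
// and formal analysis (proven or disproven for all traces).
assert_req_gnt: assert property (req_gets_gnt)
  else $error("req not granted within 4 cycles");
```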

Functional coverage can be used by designers for block-level testing, but these "smoke" tests are not generally reusable. The "sweet spot" for functional coverage is during IP verification with constrained-random simulation. At the SoC level, functional coverage can be used to find integration-level bugs as well, but here the testing is more directed and scenario-based.

A Closer Look

Nehls went on to provide considerable detail about code coverage, assertion coverage, and functional coverage. He identified four types of code coverage - block, toggle, expression, and finite state machine - describing each along with its advantages, disadvantages, and use cases. Overall, code coverage is complementary to functional coverage and is "necessary but not sufficient" by itself: it does not ensure that functionality is completely covered, and it cannot detect missing features.

Assertions, Nehls noted, can be used in a number of ways "that do not require you to be a formal verification expert." One way assertions can be used is to find unreachable code, which can save a great deal of wasted effort. Nehls explained how assertions can be used in both formal analysis and simulation, and how these two verification modes can be used together.

Nehls showed how functional coverage verifies the functionality of the design, and how it's implemented with coverage constructs in testbench code, with examples in both SystemVerilog and e. He also showed how to build a meaningful coverage model, which is essential for success with functional coverage. Like code coverage, functional coverage has some limitations. There is no automatic way to check that the coverage model is correct, and there is still a possibility of not exercising some parts of the HDL code.
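A coverage model of the sort Nehls describes is built from coverage constructs in the testbench; in SystemVerilog these are covergroups. The sketch below is a hypothetical example for illustration - the packet fields, bin ranges, and names are assumptions, not content from the webinar - showing the two ideas a model captures: interesting value ranges (bins) and interesting combinations (crosses).

```
// Hypothetical functional coverage model for a packet interface.
// Field names and bin boundaries are illustrative only.
covergroup pkt_cov @(posedge clk);
  cp_len : coverpoint pkt_len {
    bins small  = {[1:64]};      // short packets
    bins medium = {[65:512]};
    bins large  = {[513:1500]};  // near-MTU packets
  }
  cp_kind : coverpoint pkt_kind; // one bin per packet type (enum)

  // Cross coverage targets corner cases: every length range
  // combined with every packet type must be observed.
  len_x_kind : cross cp_len, cp_kind;
endgroup
```

Note the limitation the webinar flags: the tool reports whether these bins were hit, but nothing checks automatically that the model itself lists the right bins, so a weak model can report 100% coverage while real scenarios go unexercised.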

Bringing it All Together

Finally, Nehls showed how an executable verification plan can bring all the metrics together into a single view. One tool that provides this capability is the Incisive Enterprise Manager. Said Brennan: "Metrics do matter, and being able to roll them up in a reasonable way, where you can see all aspects of all metrics in one spot at one time, is really critical for the overall verification process. It really provides a quantifiable and undisputable mechanism for knowing when you are done."

Want to learn more? View the webinar by clicking here.

Richard Goering
