A Modest Proposal: Using Formal to Close Coverage Gaps

Filed under: Functional Verification, Formal Analysis, DVCon, formal, NextOp, assertion synthesis, assertions, coverage, formal verification, metrics, closure, Breker, BugScope, CVC

In my last blog post, I summarized some of our activities at DVCon and mentioned briefly the "Birds of a Feather" (BoF) panel and discussion on "Strategies in Verification for Random Test Generation: New Techniques and Technologies" held Monday evening. Today I'd like to fill in some of the details of this session and discuss the proposal I made for a combined solution from Cadence and NextOp Software to close coverage gaps left by constrained-random stimulus generation. For background, I suggest reading previous posts on formal MDV, the role of coverage in formal, and NextOp's "assertion synthesis" technology. You can also view Joe Hupcey's photos of the panelists in action.

Srinivasan Venkatramanan, CTO of CVC, set the stage for the discussion by describing the verification challenges faced by his customers, the role that random tests play, and the limitations of this approach. Then each of the other three panelists gave a short presentation with his perspective on the problem and possible solutions. I began by presenting the following diagram, which represents a simplified view of the evolution of the testbench:

Traditional testbenches used simple hand-written directed tests: stimulus was applied to the design under test (DUT), and the simulation results from the design were compared to the expected results. The next step in the evolution was randomization of some input stimulus, especially for data buses. This led to the constrained-random approach, in which all stimulus is derived from a set of testbench constraints describing the allowed behavior on the inputs. At that point it became hard to understand what portions of the design were being verified, so coverage metrics were introduced to highlight unexercised functionality.
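As a toy illustration of the constrained-random idea (the interface and constraint here are hypothetical, not from any real DUT), the sketch below draws random stimulus, keeps only draws that satisfy the input constraint, and records functional coverage bins as they are exercised:

```python
import random

def legal(addr, size):
    return addr + size <= 16          # constraint: transfer stays in a 16-byte window

def gen_stimulus(rng):
    while True:                       # rejection-sample against the constraint
        addr, size = rng.randrange(16), rng.choice([1, 2, 4, 8])
        if legal(addr, size):
            return addr, size

def run(seed, n):
    rng, bins = random.Random(seed), set()
    for _ in range(n):
        addr, size = gen_stimulus(rng)
        bins.add(("size", size))                  # which transfer sizes were seen
        bins.add(("aligned", addr % size == 0))   # aligned vs. unaligned accesses
    return bins

print(sorted(run(seed=1, n=1000)))
```

Real testbench constraints and covergroups are written in languages like SystemVerilog or e, but the mechanics are the same: stimulus is filtered by constraints, and bins record what actually happened.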

Coverage-driven verification (CDV) is the dominant advanced technique in use today and is well supported by the new Universal Verification Methodology (UVM) from Accellera. However, achieving 100% coverage by this method alone can be quite difficult; coverage tends to increase dramatically early in the project but flatten out asymptotically toward the end. I said that, from my perspective, there were three major approaches beyond CDV to try to close any coverage gaps. The simplest, and most established, is further automation of the CDV process by trying different random seed values, biasing the pseudo-random distribution of inputs, varying input constraints or using "soft" constraints as guidance rather than firm rules.
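The flattening curve has a simple statistical model behind it: with random stimulus, each successive batch of tests discovers fewer new coverage bins than the last, a coupon-collector effect. The sketch below assumes 500 equally likely bins, which is an idealization (real designs have hard-to-hit corners that flatten the curve even faster):

```python
import random

# Count how many *new* bins each batch of 200 random tests discovers.
def new_bins_per_batch(seed, n_bins=500, batches=10, batch_size=200):
    rng = random.Random(seed)
    seen, history = set(), []
    for _ in range(batches):
        before = len(seen)
        for _ in range(batch_size):
            seen.add(rng.randrange(n_bins))      # each test hits one random bin
        history.append(len(seen) - before)       # new bins found in this batch
    return history

print(new_bins_per_batch(seed=7))   # early batches find many new bins, later ones few
```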

This approach is well understood, but it is not very effective and lacks any feedback loop. In contrast, I said that I had heard positive reports about the effectiveness of "overlay" approaches, such as the graph-based technique used by Breker Systems to traverse the full coverage space more predictably. This requires a non-trivial effort to specify the intended behavior in a graphical format or language of some kind beyond the existing testbench constraints. Breker founder Adnan Hamid presented their approach later in the panel, stating that their customers found the descriptive format well suited for system-level behavior.

I focused on the middle approach in the above diagram, in which formal technologies are used to target the coverage gaps. Many coverage metrics, including code coverage, SystemVerilog cover properties, and PSL cover properties, can be used directly as targets for formal analysis. This presents an alternative way to achieve coverage goals and has the unique benefit that formal can also detect unreachable coverage. This is important to know because otherwise constrained-random simulation can run forever trying to reach coverage that can never be hit with any stimulus. The main limitation of formal is that it tends to be more effective on sub-blocks of the design than on a full-chip DUT.
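To see why formal can settle reachability where simulation cannot, consider a toy FSM (the states are made up for illustration): exhaustive exploration proves in finitely many steps that one cover goal is reachable and another can never be hit, whereas random simulation can only fail to hit the second one forever.

```python
from collections import deque

# Exhaustive reachability over a toy FSM -- a stand-in for how formal analysis
# can *prove* a cover goal unreachable, which no amount of simulation can do.
EDGES = {"IDLE": ["REQ"], "REQ": ["GRANT", "IDLE"],
         "GRANT": ["IDLE"], "ERROR": ["IDLE"]}   # nothing transitions *into* ERROR

def reachable_from(start):
    seen, work = {start}, deque([start])
    while work:                                  # breadth-first search
        for nxt in EDGES[work.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                work.append(nxt)
    return seen

r = reachable_from("IDLE")
print({g: g in r for g in ("GRANT", "ERROR")})   # prints {'GRANT': True, 'ERROR': False}
```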

Users of Cadence Incisive Enterprise Verifier (IEV) get several novel benefits in this flow. First, IEV uses a combination of formal and simulation-based techniques, so it can hit coverage, especially in a large DUT, that would be hard for a pure formal tool. Second, IEV can run simulation stimulus with all passive elements from the testbench included, allowing it to hit additional e and SystemVerilog testbench coverage that cannot be directly targeted by formal. Finally, all the metrics gathered by IEV are combined with the results of constrained-random simulation so the verification engineer has a single, unified view of status.

However, as I pointed out on the panel, there is a gap in this approach. Constrained-random stimulus is generated using testbench constraints, while formal analysis requires formal constraints, which are derived from assertions defining the legal inputs for the DUT. Formal tools cannot read most testbench constraints and so a user who doesn't already use assertions will have to write at least enough of them so that formal considers only legal input values and sequences. I then suggested that the next panelist, NextOp CEO Yunshan Zhu, might have a suggestion on how to close this gap. Yunshan presented the following diagram:
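To make the gap concrete, here is a toy sketch (the "read/write" interface and its legality rule are hypothetical) of the same rule playing both roles: filtering constrained-random stimulus in the testbench world, and pruning the exhaustive input space in the formal world. In practice the two are written in different languages, testbench constraints on one side and SVA-style assumptions on the other, which is exactly the gap:

```python
import itertools, random

def legal(op, addr):                  # the legality rule: writes only to the low half
    return op == "read" or addr < 8

# Testbench-world use: rejection-sample random stimulus against the rule.
def random_stimulus(rng):
    while True:
        op, addr = rng.choice(["read", "write"]), rng.randrange(16)
        if legal(op, addr):
            return op, addr

# Formal-world use: the same rule prunes the exhaustive input space.
formal_space = [(op, a) for op, a in
                itertools.product(["read", "write"], range(16)) if legal(op, a)]
print(len(formal_space))              # 16 legal reads + 8 legal writes = prints 24
```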

Yunshan explained how NextOp's BugScope solution can fill the gap that I identified. BugScope analyzes the DUT and the behavior exhibited during constrained-random simulation and then generates assertions and coverage based on this analysis. The verification engineer can supplement the testbench with these assertions and coverage, in simulation or hardware acceleration, and can also fire up IEV. As I described earlier, IEV can detect and report unreachable coverage, target coverage directly with formal, and hit additional coverage in simulation. All of the resulting metrics can be collected and rolled up in Incisive Enterprise Manager together with the results from simulation and hardware acceleration.
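Conceptually (NextOp's actual analysis is proprietary; this is only a Daikon-style sketch with made-up signals), assertion synthesis proposes candidate properties over what simulation exhibited and keeps those that every observed sample satisfies:

```python
# Invariant-mining sketch: keep only the candidate properties that hold in
# every observed (valid, ready, count) sample from simulation.
samples = [(1, 1, 0), (1, 0, 1), (0, 0, 1), (1, 1, 2), (0, 1, 2)]

candidates = {
    "count <= 2":         lambda v, r, c: c <= 2,
    "valid -> count < 3": lambda v, r, c: (not v) or c < 3,
    "ready -> valid":     lambda v, r, c: (not r) or v,   # falsified by (0, 1, 2)
}

mined = sorted(name for name, check in candidates.items()
               if all(check(*s) for s in samples))
print(mined)   # the surviving properties become synthesized assertions
```

Properties mined this way are only as good as the behavior the simulation exhibited, which is why the quality of the stimulus still matters, as the comment thread below discusses.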

By generating assertions and coverage from an existing simulation testbench, NextOp offers a unique link between the worlds of simulation and formal. The proposed combined solution from Cadence and NextOp has the potential to address coverage gaps without requiring users to learn any new languages or develop any new models. Yunshan and I presented this to the DVCon BoF audience, and it generated some good discussion. I'm presenting the proposal here in the hope that you will think about it, ask questions, make comments, and keep the discussion going. I'm listening!

Tom A.

The truth is out there...sometimes it's in a blog.



By Anantharaj on September 21, 2011
Hi Tom,

Thank you for the blogs on Functional verification.

I have a basic question on the generation of SVA & coverage using a tool.

My concern is that this assertion synthesis may leave a hidden bug in the design undetected, since it only reads the RTL & the simulation database to generate the SVA.

What if there is a bug in the RTL & there were NO checkers associated with it in the constrained-random environment?

In that case the SVA would be generated by the tool from faulty RTL & from CRV environment simulations that are missing a checker.

Please clarify.

By tomacadence on September 23, 2011
Thanks for your excellent question. NextOp does not claim that its assertion synthesis can generate all assertions or catch all bugs. It is certainly possible that its assertions could miss a bug.
However, the presence or absence of checkers in the simulation testbench does not matter. NextOp’s tool reads only the design and the simulation results, so as long as the simulation does a good job of exercising the design then high-quality assertions will be generated. In particular, it’s quite likely that NextOp will generate some assertions that do not have any corresponding checkers in the testbench.
Tom A.

By Yuan Lu on September 24, 2011
If the RTL is buggy, but there is no checker failure, there are two possible reasons: 1. insufficient stimulus, i.e. the test vectors did not exercise the bug. 2. insufficient checkers, i.e. the bug is exercised, but not caught by a checker.
For case 1, the stimulus is still good. NextOp can generate high quality assertions and cover properties. In addition, the coverage properties may point out coverage holes that guide the user to add test vectors.

For case 2: (a) NextOp assertions from neighboring blocks may trigger and expose the bug (sometimes an assertion firing can point to an RTL bug in a neighboring module), or (b) NextOp may generate a property that "does not make sense" to the designer, and in reviewing the properties the bug is exposed.

We've seen many real life examples of case 1, case 2a and case 2b.

Hope this helps.
Yuan Lu, NextOp
