In my last blog post,
I summarized some of our activities at DVCon and
mentioned briefly the "Birds of a Feather" (BoF) panel and discussion on "Strategies
in Verification for Random Test Generation: New Techniques and Technologies" held
Monday evening. Today I'd like to fill in some of the details of this session
and discuss the proposal that I made for a combined solution from Cadence and
NextOp Software to close coverage gaps left by constrained-random stimulus
generation. For background, I suggest reading previous posts on formal
MDV, the role of coverage
in formal, and NextOp's "assertion
synthesis" technology. You can also view Joe Hupcey's photos
of the panelists in action.
Venkatramanan, CTO of CVC, set the stage for the discussion by describing the
verification challenges faced by his customers, the role that random tests play,
and the limitations of this approach. Then each of the other three panelists
gave a short presentation with his perspective on the problem and possible
solutions. I began by presenting the following diagram, which represents a
simplified view of the evolution of the testbench:
The earliest testbenches used simple hand-written directed tests, in which stimulus is
applied to the design under test (DUT), with the simulation results from the
design being compared to the expected results. The next step in evolution was
randomization of some input stimulus, especially for data buses. This led to
the constrained-random approach, in which all stimulus is derived from a set of
testbench constraints describing the allowed behavior on the inputs. At that
point, it was hard to understand what portions of the design were being
verified, so coverage metrics were introduced to highlight unexercised
areas. Coverage-driven verification (CDV) is the dominant advanced technique in use today and is well
supported by the new Universal Verification Methodology (UVM) from Accellera. However, achieving
100% coverage by this method alone can be quite difficult; coverage tends to
increase dramatically early in the project but flatten out asymptotically
toward the end. I said that, from my perspective, there were three major
approaches beyond CDV to try to close any coverage gaps. The simplest, and most
established, is further automation of the CDV process by trying different
random seed values, biasing the pseudo-random distribution of inputs, varying
input constraints, or using "soft" constraints as guidance rather than firm
rules. This approach is well understood, but it is not very effective and lacks any feedback
loop. In contrast, I said that I had heard positive reports about the
effectiveness of the "overlay" approaches, such as the graph-based technique
used by Breker Systems to traverse the full coverage space more predictably. This
requires a non-trivial effort to specify the intended behavior using a graphical
format or language of some kind beyond the existing testbench constraints. Breker
founder Adnan Hamid presented their approach later in the panel, stating that
their customers found their descriptive format well suited for system-level
verification. I focused on the middle approach in the above diagram, in which formal
technologies are used to target the coverage gaps. Many coverage metrics,
including code coverage, SystemVerilog cover properties and PSL cover
properties, can be directly used as targets for formal analysis. This presents
an alternative way to achieve coverage goals, and has the unique benefit that
formal can also detect unreachable coverage. This is important to know since
otherwise constrained-random simulation can run on forever trying to reach
coverage that can never be hit with any stimulus. The only
downside of using formal is that it is likely to be more effective on
sub-blocks of the design than on a full-chip DUT.
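To make the idea of coverage as a formal target concrete, here is a small invented sketch (not an example from the panel): a cover property on a hypothetical FIFO that formal analysis can target directly, next to one that formal might prove unreachable.

```systemverilog
// Hypothetical FIFO monitor: cover properties usable as formal targets.
module fifo_cov #(parameter DEPTH = 8) (
  input logic clk, rst_n,
  input logic push, pop,
  input logic [$clog2(DEPTH):0] count
);
  // Reachable target: formal can generate a trace that fills the FIFO.
  cover_full: cover property (@(posedge clk) disable iff (!rst_n)
    count == DEPTH);

  // If the surrounding logic never asserts push and pop together,
  // formal can prove this cover unreachable -- telling the team to
  // stop spending simulation cycles trying to hit it.
  cover_push_pop: cover property (@(posedge clk) disable iff (!rst_n)
    push && pop);
endmodule
```

For each cover, a formal tool either produces a witness trace that reaches it or proves it unreachable; the second outcome is exactly the "unreachable coverage" result that simulation alone can never deliver.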
Users of Cadence Incisive Enterprise Verifier (IEV) get several novel benefits in
this flow. First, IEV uses a combination of formal and simulation-based
techniques, and so can hit coverage, especially in a large DUT, that would be
hard for a pure formal tool to reach. Second, IEV can run simulation stimulus with all
passive elements from the testbench included. This allows IEV to hit additional
coverage, such as SystemVerilog testbench coverage, that cannot be directly targeted by
formal. Finally, all the metrics gathered by IEV are combined together with the
results of constrained-random simulation so the verification engineer has a
single, unified view of status.
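The testbench coverage mentioned above is typically written as a SystemVerilog covergroup, which samples class-based testbench objects that a formal engine cannot see directly; a minimal hypothetical sketch (names invented for illustration):

```systemverilog
// Hypothetical transaction coverage sampled in the testbench, not the DUT.
class packet;
  rand bit [1:0] kind;    // e.g. 0=read, 1=write, 2=interrupt, 3=idle
  rand bit [7:0] length;
endclass

class packet_coverage;
  packet pkt;
  covergroup cg;
    kind_cp   : coverpoint pkt.kind;
    length_cp : coverpoint pkt.length {
      bins short_pkt = {[1:16]};
      bins long_pkt  = {[17:255]};
    }
    kind_x_len : cross kind_cp, length_cp;
  endgroup
  function new();
    cg = new();  // embedded covergroup instantiated in the constructor
  endfunction
  function void sample(packet p);
    pkt = p;
    cg.sample();
  endfunction
endclass
```

Because this coverage lives in class-based testbench code, only simulation semantics can sample it, which is why a hybrid engine that runs stimulus with the passive testbench attached can hit it while pure formal cannot.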
However, as I pointed out on the panel, there is a gap in this approach.
Constrained-random stimulus is generated using testbench constraints, while
formal analysis requires formal constraints, which are derived from assertions defining
the legal inputs for the DUT. Formal tools cannot read most testbench
constraints and so a user who doesn't already use assertions will have to write
at least enough of them so that formal considers only legal input values and
sequences. I then suggested that the next panelist, NextOp CEO Yunshan Zhu,
might have a suggestion on how to close this gap. Yunshan presented the following
diagram and explained how NextOp's BugScope solution can fill the gap that I identified.
BugScope analyzes the DUT and the behavior exhibited during constrained-random
simulation and then generates assertions and coverage based on this analysis.
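The properties produced by assertion synthesis are ordinary SVA, so they drop into both the simulation and the formal flow. As an invented illustration (not actual BugScope output), the result might resemble:

```systemverilog
// Invented illustration of synthesized properties on a DUT interface.
// A protocol rule inferred from simulation traces: a request, once
// raised, holds until it is granted.
property p_req_until_gnt;
  @(posedge clk) disable iff (!rst_n)
  req && !gnt |=> req;
endproperty
assert_req_stable: assert property (p_req_until_gnt);

// A synthesized cover marks behavior the random tests never exercised,
// giving formal (or further simulation) a concrete target.
cover_back2back: cover property (@(posedge clk) disable iff (!rst_n)
  gnt ##1 req);
```

Note that the same SVA, applied as `assume property` on the DUT inputs, gives formal the legal-input constraints it needs, which is the testbench-constraint versus formal-constraint gap described above.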
The verification engineer can supplement the testbench with these assertions
and coverage, in simulation or hardware acceleration, and can also fire up IEV.
As I described earlier, IEV can detect and report unreachable coverage, target
coverage directly with formal, and hit additional coverage in simulation. All
of the resulting metrics can be collected and rolled up in Incisive Enterprise
Manager together with the results from simulation and hardware acceleration.
By generating assertions and coverage from an existing simulation testbench,
NextOp offers a unique link between the worlds of simulation and formal. The proposed
combined solution from Cadence and NextOp has the potential to address coverage gaps without having to learn any new
languages or develop any new models. Yunshan and I presented this proposal to
the DVCon BoF audience, and it generated some good discussion.
I'm presenting the proposal here in the hope that you will think about it, ask
questions, make comments, and generate some ongoing discussion. I'm listening!
The truth is out there...sometimes
it's in a blog.