All I Really Need to Know About MDV I Learned From Hollywood - Part 2

Filed under: Verification IP modeling, MDV, vPlan, metric-driven verification, verification planning

My last blog entry began a series using quotes from Hollywood movies to illustrate some of the key concepts about metric-driven verification (MDV). Given that this idea was inspired by a rather strange dream, I'm pleased to report that the feedback has been very positive and that I didn't seem to creep anyone out. So I'll forge ahead: last time I dealt with the planning phase and this time it's on to the "construct" and "execute" phases. As a reminder, here's the Cadence MDV flow:

[Figure: The Cadence MDV flow]
Building an efficient, effective, and reusable verification environment is the goal of the "construct" phase. I use the term "verification environment" not just to make it sound impressive but also to draw a clear contrast with a traditional "testbench" running hand-written directed tests one by one. A modern verification environment will have at least these key attributes (a code sketch follows the list):

  • Constructed of hierarchical building blocks following a common methodology
  • Common structure, communication and messaging across all the blocks
  • A clear separation of test and testbench so that multiple tests can run on the same setup
  • Constrained-random stimulus generation, possibly supplemented by some directed tests
  • Collection of metrics, especially functional coverage, to assess verification thoroughness
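To make the test/testbench separation and constrained-random stimulus concrete, here is a minimal SystemVerilog/UVM sketch. All of the names (bus_item, my_env, base_test) and the address constraint are hypothetical, and the environment body is elided; this illustrates the structure rather than a complete testbench.

import uvm_pkg::*;
`include "uvm_macros.svh"

// Constrained-random transaction; fields and constraints are illustrative only.
class bus_item extends uvm_sequence_item;
  rand bit [31:0] addr;
  rand bit [31:0] data;
  constraint legal_addr_c { addr inside {[32'h0000_0000 : 32'h0000_FFFF]}; }
  `uvm_object_utils(bus_item)
  function new(string name = "bus_item");
    super.new(name);
  endfunction
endclass

// The reusable environment: agents, scoreboard, and coverage collectors
// would be built here, independent of any particular test.
class my_env extends uvm_env;
  `uvm_component_utils(my_env)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
endclass

// A test only selects stimulus and configuration; the environment is untouched,
// so many different tests can run on the same setup.
class base_test extends uvm_test;
  `uvm_component_utils(base_test)
  my_env env;
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    env = my_env::type_id::create("env", this);
  endfunction
endclass

Each new test extends base_test and overrides only the stimulus or configuration choices, which is exactly what lets an entire regression share one environment.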

Fortunately, the imminent Universal Verification Methodology (UVM) standard from Accellera goes a long way toward helping build such an environment. The UVM defines an open-source building-block library that enables easy construction of such components as stimulus generators, protocol monitors, and results checkers. The library is accompanied by a comprehensive reference manual and a detailed user guide showing how to put the pieces together. A UVM book is also available to provide further assistance.
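As an example of one such building block, here is a sketch of a stimulus generator built on the library's uvm_sequence base class, reusing the hypothetical bus_item transaction from the sketch above; the sequence name and item count are again invented for illustration.

// Hypothetical stimulus generator: a sequence that sends a stream of
// randomized bus_item transactions through whatever driver the agent provides.
class random_traffic_seq extends uvm_sequence #(bus_item);
  `uvm_object_utils(random_traffic_seq)

  int unsigned num_items = 20;  // how many transactions to generate

  function new(string name = "random_traffic_seq");
    super.new(name);
  endfunction

  task body();
    bus_item item;
    repeat (num_items) begin
      item = bus_item::type_id::create("item");
      start_item(item);
      if (!item.randomize())
        `uvm_error("RANDFAIL", "bus_item randomization failed")
      finish_item(item);
    end
  endtask
endclass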

If the design being verified uses any standard interfaces, it is quite likely that UVM-ready commercial verification IP (VIP) is available. An off-the-shelf component provides both stimulus and checking compliant with the protocol, saving time and making it even easier to set up the verification environment. It also provides all the appropriate protocol metrics, both checks and coverage, to collect for the MDV flow. Of course, in the other (non-VIP) components the verification team must define their own appropriate metrics.
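For those non-VIP components, a user-defined metric is often just a covergroup sampled from a monitor's analysis stream. The sketch below, which again uses the hypothetical bus_item from earlier, shows one way to package such a metric as a UVM subscriber; the bins chosen are purely illustrative.

// Hypothetical user-defined functional coverage for a non-VIP block.
// A monitor's analysis port would be connected to this subscriber's
// analysis_export so that every observed transaction is sampled.
class my_block_coverage extends uvm_subscriber #(bus_item);
  `uvm_component_utils(my_block_coverage)

  bus_item tr;  // most recently observed transaction

  covergroup addr_cg;
    cp_addr_range : coverpoint tr.addr {
      bins low  = {[32'h0000_0000 : 32'h0000_7FFF]};
      bins high = {[32'h0000_8000 : 32'h0000_FFFF]};
    }
  endgroup

  function new(string name, uvm_component parent);
    super.new(name, parent);
    addr_cg = new();
  endfunction

  // Called by the analysis port for every transaction the monitor observes.
  function void write(bus_item t);
    tr = t;
    addr_cg.sample();
  endfunction
endclass

Coverage defined this way can be gathered alongside the VIP-provided metrics during the execute phase described next.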

Building a standard UVM environment has many advantages. Using the building blocks and following the user guide's recommendations results in consistent verification components and enables project-to-project (and even company-to-company) reuse. Further, the availability of a standardized environment is a big incentive to make the leap to MDV. The barrier to entry is much lower, and engineers can be confident that what they learn will also be reusable. As the oft-misquoted line from Field of Dreams says, "If you build it, they will come." True indeed!

The "execute" phase of MDV is certainly the easiest to understand: run verification as fast as possible. Historically, this has meant running lots of simulations of the verification environment in parallel on a server farm, using different constraint values, randomization seed values, or configuration options so that each server is exercising the design differently. Each simulation run gathers the VIP-provided and user-defined metrics, storing them away in a database for analysis during the fourth phase.

One key aspect of the Cadence MDV solution is that the run phase is not limited to just digital simulation. Metrics are also gathered during mixed-signal simulation, formal analysis, fusion of simulation and formal technologies, hardware simulation acceleration, and even in-circuit emulation. No matter the engine, a common set of metrics is gathered and saved. My next blog post will discuss how these metrics are merged and presented for measurement and analysis. In the meantime, please comment and suggest some additional relevant movie quotes!

 Tom A.

 The truth is out there...sometimes it's in a blog.
