Functional Verification Survey -- Why Gate-Level Simulation is Increasing


In a recent webinar on increasing functional verification performance, the point was made that gate-level simulation usage is increasing. Wait a minute, I thought - haven't we spent the last two decades talking about raising the abstraction level for design and verification? While some IC verification teams are indeed moving up to software-driven verification and transaction level modeling (TLM), it turns out that there are increasingly compelling reasons to run gate-level simulation, as revealed in a recent Cadence customer survey.

In a typical flow, gate-level simulation is run after the RTL code is simulated and synthesized into a gate-level netlist. Static timing analysis (STA) and logic equivalence checking are also run after RTL synthesis, but by themselves these static verification methods don't cover everything. Equivalence checking, for instance, doesn't consider timing or detect X-state optimism (explained below).

One reason for running gate-level simulation is design for test (DFT). Because scan chains are inserted after the gate-level netlist is created, gate-level simulation is often used to determine whether scan chains are correct. Another motivation for gate-level simulation is that technology libraries at 45nm and below have far more timing checks, and more complex timing checks, than older process nodes.
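To make the scan-chain motivation concrete, here is a minimal sketch (illustrative only, not real DFT tooling; the `ScanFlop` model and `shift_pattern` helper are assumptions) of the kind of connectivity check a gate-level DFT simulation performs: shift a known pattern into the chain, flush it through, and verify the same pattern emerges at scan-out.

```python
# Toy model of a scan-chain connectivity check. This is a hedged
# illustration of the concept, not simulator or ATPG code.

class ScanFlop:
    """A scan flip-flop in shift mode: stores scan-in, emits old Q."""
    def __init__(self):
        self.q = 0

    def shift(self, scan_in):
        out, self.q = self.q, scan_in
        return out

def shift_pattern(chain, pattern):
    """Shift `pattern` into the chain, then flush with zeros.
    If the chain is stitched correctly, the pattern reappears
    unchanged at scan-out after len(chain) flush cycles."""
    out = []
    for bit in pattern + [0] * len(chain):
        so = bit
        for flop in chain:       # scan-in ripples flop to flop
            so = flop.shift(so)
        out.append(so)
    return out[len(chain):]      # drop the initial flush bits

chain = [ScanFlop() for _ in range(4)]
pattern = [1, 0, 1, 1]
assert shift_pattern(chain, pattern) == pattern  # chain is intact
```

A broken stitch (e.g., a flop wired out of order) would corrupt the recovered pattern, which is exactly the class of bug gate-level simulation catches after scan insertion.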

Gagandeep Singh, Cadence staff R&D engineer, mentioned the survey at a Dec. 4, 2012 webinar on improving verification performance (see my recent blog review here). I spoke to Singh and Amit Dua, senior staff product engineer, to get some more details. The survey involved verification engineers from 7 major Cadence customers located in North America, Japan, India, and Europe. Process nodes mostly ranged from 28nm to 45nm (note: the Cadence Incisive verification platform supports 20nm as well). Respondents cited the top reasons for running gate-level simulation as follows:

  1. Reset Verification. Gate-level simulation can verify system initialization and show that the reset sequence is correct; it requires a complete reset of the design.
  2. X Optimism in RTL. An RTL simulator may "optimistically" assign a 0 or 1 to a value that a gate-level simulator would identify as X (unknown). The gate-level simulator can thereby expose a mismatch that needs to be fixed.
  3. Timing Verification on Multi-Cycle/Asynchronous Paths. Static timing analysis can't analyze asynchronous interfaces, and it depends on correct false-path and multi-cycle path constraints. Engineers may also want to re-verify STA results in simulation.
  4. Basic Heart-Beat Test. Even though RTL simulation has already been run, some verification teams want to run a very limited "sanity check" to verify functionality at the gate level.
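The X-optimism point (reason 2 above) can be sketched with a toy 4-state model. This is an illustrative assumption, not simulator code: an RTL-style `if/else` with an unknown select silently takes the else branch, while a gate-level mux propagates the unknown.

```python
# Toy 4-state logic ('0', '1', 'x') illustrating X-optimism.
# Hedged sketch only; real simulators follow the HDL LRM semantics.

def rtl_mux(sel, a, b):
    """RTL-style if/else: an 'x' select 'optimistically' falls
    through to the else branch, hiding the unknown."""
    return a if sel == '1' else b

def gate_mux(sel, a, b):
    """Gate-level mux: an 'x' select yields 'x' unless both data
    inputs happen to agree."""
    if sel == '1':
        return a
    if sel == '0':
        return b
    return a if a == b else 'x'   # sel is unknown

# An uninitialized select line after an incomplete reset:
assert rtl_mux('x', '1', '0') == '0'   # RTL quietly resolves it
assert gate_mux('x', '1', '0') == 'x'  # gate level exposes the unknown
```

This is why a design that passes RTL regression can still fail gate-level simulation: the gate netlist refuses to guess values that the RTL model resolved optimistically.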

A separate question about DFT simulation revealed that about half of respondents use this technique to verify scan chains.

Survey respondents said that gate-level simulation may take up to one-third of the simulation time, and could potentially take most of the debugging time. While far more bugs will be caught in RTL simulation, Singh noted that "gate-level debug is far more complex and time-consuming than RTL debug." Unlike RTL batch/regression runs, Dua noted, gate-level debug runs cannot be enhanced by simply adding more compute power, since manual effort is required to debug problems.

When is gate-level simulation run? That's a tricky balance, because a bug caught late in the verification cycle is an expensive bug to fix. On the other hand, gate-level simulation isn't very useful until the RTL is reasonably stable. "It cannot be done too early and it should not be done very late in the design," Singh said.

Speeding Things Along

Since gate-level simulation (especially with timing) runs much more slowly than RTL simulation, it potentially has a significant impact on the verification closure cycle. Thus, there's keen interest in speeding gate-level simulation. Applying more zero-delay simulation is one way to do this. The survey respondents reported that they're using more zero-delay simulation than timing simulation at the gate level.

Singh noted that zero-delay simulation is adequate for most functional verification, and that it runs 3-4X faster than timing simulation. All major simulators have some option for turning off timing, but different simulators provide different features. The Cadence Incisive Enterprise Simulator, for instance, offers delay mode control and built-in features that can help designers run zero-delay simulations more effectively. This is useful because zero delay mode can introduce race conditions into the design.
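The race-condition risk can be shown with a toy scheduler (an illustrative assumption, not how any real simulator is built): when delays are removed, two blocks that communicate through the same signal in the same timestep produce order-dependent results.

```python
# Hedged sketch of a zero-delay race: with no delays, the sampled
# value depends purely on which block the scheduler runs first.

def run_edge(driver_first):
    """Evaluate one clock edge with two zero-delay blocks."""
    sig = {'d': 0, 'q': 0}

    def driver():                 # drives d with no delay
        sig['d'] = 1

    def sampler():                # samples d on the same edge
        sig['q'] = sig['d']

    blocks = [driver, sampler] if driver_first else [sampler, driver]
    for block in blocks:          # arbitrary scheduler ordering
        block()
    return sig['q']

assert run_edge(True) == 1    # sampler sees the new value
assert run_edge(False) == 0   # sampler sees the old value
```

With timing annotated, the driver's output would not change until after the sampler had captured the pre-edge value, so the ambiguity disappears; this is the kind of hazard that simulator features for zero-delay mode are meant to help manage.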

Incisive also offers a timing file that lets you turn off timing for particular instances in a design. And if you really need speed for untimed simulations, the Palladium XP accelerator/emulator can offer speeds 10,000 times faster than simulation.

Incisive also lets engineers provide limited debug access to certain portions of the design, so they don't end up dumping waveforms for areas they're not going to debug anyway. If full debug access is needed, a switch can provide it. There's also an option (-ZLIB) that can compress snapshots and save disk space, while letting users set the level of compression.

So in short, an old technology - gate-level simulation - is enjoying a revival as we move down the process node curve. New methodologies and faster simulation performance will be necessary to avoid creating a new bottleneck.

Richard Goering

Comments(2)

By SACHIN RAJ AGGARWAL on January 17, 2013
Until recently, designers have focused mostly on static, stuck-at-1 and stuck-at-0 defects. At 45nm, Vachon noted, delay faults begin to become important. At 28nm and 20nm delay faults dominate the defects that customers see. Delay faults (or transition faults) can result in "slow to rise" or "slow to fall" defects. "The tests required to detect those kinds of defects are complex, and they require at-speed test clocking," Vachon noted. "This drives the need for special test clocking IP during DFT insertion."

By Gaurav Jalan on January 17, 2013
Richard, I completely agree with you on this. For interested readers, a comprehensive list of the reasons for GLS and related material is available below - whatisverification.blogspot.in/.../gate-level-simulations-necessary-evil.html
