We’ve all heard that functional verification takes 60 to 70 percent of the design cycle, but what’s not often discussed is what, exactly, takes up all that verification time. For a growing number of design and verification teams, the biggest single bottleneck is the time spent diagnosing and fixing bugs. That also happens to be the part of the verification process that is the least automated, and the most lacking in any sort of formalized methodology.
Mike Stellfox, distinguished engineer at Cadence, has an interesting perspective on what’s making the debug bottleneck worse. The “problem” is that the coverage-driven verification methodology pioneered by Verisity, and now provided by Cadence, has made it easy to find lots of bugs. As a result, Mike said, engineers “are good at finding bugs to the point that they’re finding way more bugs than they can possibly debug.”
Prior to coverage-driven verification, Mike noted, verification teams used a lot of manual, directed testing. The primary bottleneck was the time it took to write all those tests. With automated pseudo-random test generation, it suddenly became possible to quickly generate tests and run thousands of tests in parallel. As a result, “there’s lots of automation for bug finding, but nothing significant is done to automate the debugging process,” Mike commented.
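The leverage of pseudo-random generation comes from seeding: one seed deterministically reproduces one test, so thousands of distinct tests are cheap to create and any failing one can be replayed. Here is a minimal sketch of the idea in Python — the operation set, field ranges, and function names are illustrative, not any particular tool's API:

```python
import random

def generate_test(seed, num_ops=5):
    """Generate one pseudo-random test: a reproducible sequence of
    bus operations (the operation set and ranges are hypothetical)."""
    rng = random.Random(seed)  # seeded, so a failure can be replayed exactly
    ops = []
    for _ in range(num_ops):
        ops.append({
            "op": rng.choice(["read", "write"]),
            "addr": rng.randrange(0, 0x100, 4),  # word-aligned address
            "data": rng.getrandbits(32),
        })
    return ops

# The same seed always reproduces the same test -- which is what
# makes a randomly found failure debuggable.
assert generate_test(42) == generate_test(42)

# Scaling to thousands of tests is just a loop over seeds.
tests = [generate_test(seed) for seed in range(1000)]
```

Real constrained-random generators add declarative constraints on the fields, but the reproducibility-by-seed mechanism is the same.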
The debug cycle is actually fairly involved. According to Joe Hupcey III, director of product marketing for enterprise and functional verification and a frequent Cadence blogger, it includes these five steps:
- Analyzing simulation failures. A failure in simulation may or may not be caused by a design error – it could also result from an error in the testbench.
- Isolating bugs and identifying their causes. It’s possible that a number of different failures could have the same root cause and thus point to a problem that only needs fixing once.
- Fixing a bug. This usually involves changing the RTL code to eliminate the root cause.
- Confirming that the bug fix works. A test needs to ensure that the bug has gone away, and that new bugs were not created in the process of fixing it.
- Creating a regression test to ensure the bug stays fixed. The testbench must make sure the bug is not re-introduced.
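The isolation step — recognizing that many failures share one root cause — is often done by eye across log files, but the basic idea can be sketched as signature-based bucketing. The following Python sketch masks run-specific details (simulation times, hex values) out of failure messages so that failures differing only in those details group together; the message format and names are hypothetical, not from any real simulator:

```python
import re
from collections import defaultdict

def signature(msg):
    """Reduce a failure message to a coarse signature by masking
    run-specific details, so messages that differ only in timestamps
    or data values compare equal."""
    sig = re.sub(r"0x[0-9a-fA-F]+", "0xN", msg)   # mask hex values
    sig = re.sub(r"\b\d+\s*ns\b", "T", sig)       # mask simulation times
    return sig

def triage(failures):
    """Group (test_name, message) failures by signature.
    Each bucket likely points at a single root cause to fix once."""
    buckets = defaultdict(list)
    for test_name, msg in failures:
        buckets[signature(msg)].append(test_name)
    return buckets

failures = [
    ("test_007", "ERROR at 1200 ns: FIFO overflow, wptr=0x1f"),
    ("test_042", "ERROR at 3450 ns: FIFO overflow, wptr=0x0a"),
    ("test_101", "ERROR at 800 ns: CRC mismatch, got 0xdead"),
]
# Two of the three failures share a signature -> probably one bug, one fix.
```

Production triage tools use richer features than a masked message string, but the principle — cluster first, then debug one representative per cluster — is the same.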
Various point tools exist to help engineers with the debug cycle – including waveform viewers, source code debuggers, and transaction debuggers. But much of the work is manual, ad hoc, and based on guesswork. There is no formalized debug methodology comparable to the Open Verification Methodology (OVM).
So what needs to happen? Mike sees the need for several improvements. One involves bringing debugging to a higher level of abstraction, with correlation down to the signal level. Another would improve multi-language debugging. A third imperative is to provide a consistent debug environment for simulation, formal verification, and emulation.
Cadence is looking into ways to automate the debug cycle. Specifics will have to wait for the future, but for now, what’s needed is an awareness of the problem, and an industry discussion around a growing challenge that’s so far been relegated to the sidelines.