Most IC verification teams use code coverage as a signoff criterion, but they often have limited information about unreachable code. A new "case-splitting" methodology, described in a recently archived webinar, shows how formal analysis can provide new insight into coverage holes -- while requiring no understanding of formal analysis on the user's part.
The webinar is titled "Simplifying Code Coverage Analysis: Automatically Separating the Wheat from the Chaff." It was presented by Joe Hupcey, Cadence product marketing director, and Jose Barandiaran, senior member of consulting staff.
Barandiaran began the webinar by talking about the "dead code challenge." There are two causes of unreachability, he noted. One is that the code is "illegal" in the sense that it was never intended to execute -- perhaps the code came with externally acquired IP and the system doesn't use that particular functionality. That's generally okay. The other cause is that the code is supposed to be functional but is unreachable. That is not okay.
Are Coverage Holes Reachable?
However, it is often difficult to determine whether coverage holes are unreachable. That's where the case-splitting technique comes in. It leverages formal analysis (using Incisive Formal Verifier or Incisive Enterprise Verifier) "under the hood" to determine if holes are reachable. It's an automated flow in which properties are automatically generated, and no knowledge of formal analysis is required.
The diagram below shows how it works. You pass a simulation coverage database to the formal engine, along with a reused or newly created simulation snapshot. You select either module- or instance-level analysis, and identify the code coverage targets for the formal engine to analyze. The formal tool generates the properties and runs them. The tool then reports unreachable holes and back-annotates them into the coverage database, so you can go into a reporting tool later and analyze each one, determining whether it represents code that's supposed to be functional and thus needs to be fixed.
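The steps above can be sketched in miniature. This is an illustrative sketch only, not the tool's actual implementation: the real flow is driven automatically by the Incisive tools, and every name below (the functions, the stub formal check, the example holes) is hypothetical. It just shows the shape of the technique -- one auto-generated cover property per coverage hole, a formal reachability check on each, and back-annotation of the proven-dead ones.

```python
# Hypothetical sketch of the case-splitting flow's structure.
# None of these names come from the Incisive tools.

def generate_cover_property(hole):
    """Stand-in for automatic property generation:
    one cover property per code-coverage hole."""
    return f"cover property (@(posedge clk) {hole});"

def classify_holes(holes, prove_reachable):
    """Run the (stubbed) formal check on each hole's property
    and split the holes into reachable vs. unreachable."""
    result = {"reachable": [], "unreachable": []}
    for hole in holes:
        prop = generate_cover_property(hole)
        if prove_reachable(prop):
            result["reachable"].append(hole)
        else:
            # Back-annotation step: in the real flow these are
            # marked in the coverage database as unreachable.
            result["unreachable"].append(hole)
    return result

# Stub formal engine: pretend any hole guarded by a DISABLED config
# mode (e.g. unused functionality in acquired IP) is unreachable.
holes = ["state == IDLE", "cfg_mode == DISABLED && req", "fifo_full"]
report = classify_holes(holes, lambda p: "DISABLED" not in p)
print(report["unreachable"])
```

The point of the split is the triage it enables afterward: reachable holes go back to the simulation team as stimulus gaps, while unreachable ones get reviewed as either acceptable (illegal/unused code) or as functional bugs.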
Barandiaran noted that there are some performance tradeoffs to consider. Out of the box, the flow runs without any constraints and uninitialized. That's the fastest approach, but it may also miss some unreachable coverage holes. You can dial up the constraints and/or initialization and run more slowly, but identify more unreachable holes.
In one customer example shared during the webinar, an uninitialized run on a design with 40K state bits analyzed 773 coverage holes in 2.8 hours and found 7.6% of them unreachable. An initialized run found more unreachable holes (10% of the 773) but took 35 hours; distributing the flow over a CPU network cut that run time to 5 hours.
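Taking the figures at face value, a quick arithmetic check shows roughly what those percentages mean in absolute terms (the webinar quoted only percentages, so the hole counts below are approximations, not reported numbers):

```python
# Approximate hole counts implied by the reported percentages.
total_holes = 773
uninit_unreachable = round(total_holes * 0.076)  # uninitialized run
init_unreachable   = round(total_holes * 0.10)   # initialized run
speedup            = 35 / 5                      # from CPU distribution
print(uninit_unreachable, init_unreachable, speedup)
```

So the initialized run caught on the order of 18 additional unreachable holes, and distribution bought roughly a 7x reduction in wall-clock time.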
This example, and others presented during the webinar, show that the case-splitting methodology is in use by customers today. "This has been exercised in the field. It's not just some lab experiment. It really works quickly and cleanly, and you get the data in a very straightforward manner," Hupcey said.
The webinar also included a demo, in which it took only a few minutes to find 33 coverage holes, of which 9 were unreachable. "If you have the tools already, this is really a no-brainer," Barandiaran said.
The webinar is available to Cadence Community members here (quick and easy free registration if you're not a member). If you already have the Incisive Enterprise Verifier, see chapter 5 of the user guide for an explanation of the case-splitting technique.
For those who want to know more, a paper from CDNLive! India 2011 details Freescale's experience with the case-splitting technique. It is available to Cadence Community members here. Look for session 1.6 under track 1, Silicon Realization: Functional Verification.