As outlined in a prior post, new advances in formal and multi-engine technology (like Incisive Enterprise Verifier, or "IEV") enable users to do complete verification of design IP using only assertions (i.e., no testbench required!), especially for blocks of around 1 million flops or less. Given this premise, it's natural to ask: "OK, but how does formal and multi-engine assertion-based verification (ABV) fit into the coverage and metric-driven workflow that I am (A) familiar with, and (B) know is effective in measuring my progress?" In the following series of blog posts, I'll answer these important questions. To spare you some suspense: conceptually, the coverage-driven verification terms and methodologies you are familiar with from writing testbenches and/or constrained-random stimulus in e or SystemVerilog -- terms like "constraints," "code coverage," and "functional coverage" -- have essentially the same meaning in the formal-centric ABV world.
Taking a step back, coverage collection -- regardless of whether you are collecting code coverage, functional coverage, assertion coverage, or all of them simultaneously -- is a continuous process. Furthermore, at different phases of verification you typically ask different questions, depending on what insights you need from the coverage. Long story short: in a formal verification context, it all boils down to the following three primary questions:
- How good are my formal constraints?
- How good is my formal verification proof?
- How can I feel confident my verification is complete?
We will address the first question below -- the other two will be addressed in subsequent posts.
How good are my formal constraints?
First, consider the three-segment partition of the phases of verification illustrated in the figure below.
Figure 1: Phases of verification and coverage collection
Working from the top of the block diagram down, the first phase, "constraints development and design exploration," is quite simply the development of constraints on the universe of possible stimulus. As noted above, this is conceptually similar to a constrained-random stimulus setup for dynamic simulation. However, in this process the user describes the valid behavior of the inputs using Property Specification Language (PSL) or SystemVerilog Assertions (SVA) properties. As you might expect, if the inputs you want to drive follow popular standard protocols (OCP, AHB, etc.), then standard verification IP components are likely available to give you a running start (Cadence offers multiple assertion-based verification components for this purpose).
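To make this concrete, here is a minimal sketch of what such input constraints might look like as SVA `assume` properties for a hypothetical valid/ready handshake interface (the module, signal names, and protocol rules are illustrative assumptions, not from any particular IP):

```systemverilog
// Hypothetical environment constraints for a simple valid/ready interface.
// All names and protocol rules here are illustrative assumptions.
module input_constraints (
  input logic       clk, rst_n,
  input logic       valid, ready,
  input logic [7:0] data
);
  // Once asserted, valid must hold until the transfer is accepted.
  assume property (@(posedge clk) disable iff (!rst_n)
    valid && !ready |=> valid);

  // Data must stay stable while a transfer is pending.
  assume property (@(posedge clk) disable iff (!rst_n)
    valid && !ready |=> $stable(data));
endmodule
```

In simulation these same properties would typically be constraints on a random stimulus generator; in formal, the tool treats them as assumptions that restrict the universe of input traces it explores.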
Once your PSL/SVA properties, a.k.a. "constraints," are written, you apply the formal tool to generate traces from these constraints and start running them on the design. Again, this is similar to running tests in simulation -- it's just that in this case the formal engines are doing the work instead of an event-driven simulator.
Once engaged in the process, you will inevitably need to correct and refine your constraints as design problems and bugs become apparent. In particular, once you are up to speed it's clearly desirable to assess exactly how good or bad your constraints are. Specifically:
1 - Are constraints hiding some legal behavior ("over constraining")?
2 - Are constraints allowing some illegal behavior ("under constraining")?
Let's address these questions one-by-one.
Key Question 1: Are constraints hiding some legal behavior (Over Constraining)?
This question is answered by reachable coverage collection. In this process, covers (or "coverage points," whichever term you prefer) are automatically generated by the tool, which then generates a "witness trace" from your constraints (a machine-generated "directed test," if you prefer) to see whether every line of the RTL design can be reached and every state of each finite-state machine can be traversed. These covers are called dead-code checks (for lines of RTL) and FSM-state checks (for states of RTL finite-state machines) in IEV. The best part (and the advantage of formal over dynamic simulation): because formal algorithms are mathematically exhaustive, a dead-code check can fail conclusively. That is, formal can tell you with mathematical certainty that no legal stimulus exists that could hit a given line of RTL code. Fortunately, the tools enable the user to debug such scenarios and fix this "over constraining." (In the next blog post, we will provide details on debugging over constraining.) The aforementioned FSM-state checks are executed and reviewed in the same way.
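As a sketch of how over-constraining produces dead code, consider the hypothetical fragment below (the DUT, signals, and assumption are invented for illustration). The environment assumption forbids `err_in` from ever asserting, so the error branch in the DUT becomes formally unreachable, and an automatically generated dead-code check on that line would fail conclusively:

```systemverilog
// Hypothetical illustration of over-constraining: the assume below rules
// out all error stimulus, so the error branch in the DUT is dead code.
module dut (
  input  logic clk, rst_n, err_in,
  output logic err_flag
);
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n)      err_flag <= 1'b0;
    else if (err_in) err_flag <= 1'b1;  // unreachable under the assume below
endmodule

module env_constraints (input logic clk, rst_n, err_in);
  // Over-constraint: hides the legal (and interesting) error behavior.
  assume property (@(posedge clk) disable iff (!rst_n) !err_in);
endmodule
```

The fix in a case like this is to relax or remove the offending assumption so the error-handling logic can be exercised and verified.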
If all dead-code checks and FSM-state checks pass, then we know the constraints allow sufficient traces to be generated. However, the constraints could still be too "loose" and allow some illegal traces to be generated. This issue is addressed by the following question.
Key Question 2: Are constraints allowing some illegal behavior (Under Constraining)?
This question is answered during the "checks development and bug hunting" phase. In short, if your constraints allow some illegal behavior, then your property checks will fail. In this case, modify the offending constraints to fix the issue. Fixes to constraints and fixes to DUT bugs are treated the same way during this phase. Of course, your fixes could go too far and create the over-constraining situation described above, so it is recommended that you perform reachable coverage collection on a regular basis. To rephrase: both the "constraints development and design exploration" and "checks development and bug hunting" phases participate in reachable coverage collection.
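As a hypothetical sketch of this loop (all names and protocol rules are invented for illustration): a check fails, the counterexample trace reveals that the failure was caused by stimulus the real environment could never produce, and the remedy is to tighten the constraints rather than "fix" the DUT:

```systemverilog
// Hypothetical check: every grant must be preceded by a request.
module checks (input logic clk, rst_n, req, gnt);
  assert property (@(posedge clk) disable iff (!rst_n)
    gnt |-> $past(req));
endmodule

// Suppose the failing trace shows req dropping before gnt arrives --
// illegal stimulus for this (invented) protocol. The fix is an added
// constraint, not a DUT change:
module refined_constraints (input logic clk, rst_n, req, gnt);
  // Under-constraint fix: req must be held until gnt is received.
  assume property (@(posedge clk) disable iff (!rst_n)
    req && !gnt |=> req);
endmodule
```

After adding such a constraint, rerunning the reachable coverage collection described above confirms the tightened constraints have not swung to over-constraining.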
In summary, reachable coverage collection is done using automatically generated covers for dead-code checks and FSM-state checks. Formal, simulation, and mixed engines all participate in finding traces for these coverage checks, ensuring that the constraints allow sufficient legal traces to be generated. Conversely, the tool also uses the user's checks to confirm that no illegal traces are being generated from the constraints.
Coming up in Part II of this series I'll address the next logical question in ABV coverage: "How good is my formal verification proof?" Until then, happy verifying!
On Twitter: http://twitter.com/teamverify, @teamverify