Simulation acceleration and emulation technology has commonly been used to run large blocks and system-level configurations faster, and to verify software against a very fast yet accurate RTL hardware model. With current system design capacities in the multi-million-gate range, simulating these designs at 100 to 100,000 times the speed of a software simulator already provides a huge benefit to system verification teams across the globe.
But is “running faster” the only metric by which you measure acceleration/emulation benefit?
I think that acceleration speed will continue to be an important factor, but not the only one moving forward. The true metric is how fast you can reach the “completion point” of your verification, in other words knowing that all bugs have been “flushed out” before the product is out the door. To accomplish this goal, you need to accelerate not only your simulation runs, but your entire system-level verification process.
While the most prevalent verification metrics considered in addition to acceleration speed have been fast compile and efficient debug, some new metrics need to be examined: Are your runs on the accelerator and emulator getting you to the desired “completion point”? Do you apply the right tests to your accelerator and emulator resources to verify system-level scenarios in the most effective way? Do you run your accelerator in the most effective way?
Verification acceleration towards your “completion point” entails good planning of your verification modeling strategy, effective management of your simulation and acceleration/emulation resources and use models, and good verification coverage metrics telling you that your desired completion point has been reached. These, in my mind, make the main difference between “simulation acceleration” and well-planned “verification acceleration” with the “completion point” end-goal in mind.
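To make the idea of a coverage-driven “completion point” concrete, here is a minimal sketch of how such a check might look. The bin names, the weighting, and the 95% threshold are all illustrative assumptions for this article, not the API of any particular verification tool:

```python
# Hypothetical sketch: deciding whether the verification "completion
# point" has been reached, based on functional coverage bins collected
# from accelerated/emulated runs. All names and numbers are illustrative.

def completion_reached(coverage_bins, threshold=0.95):
    """Return True when the fraction of hit coverage bins meets the threshold."""
    covered = sum(1 for hit in coverage_bins.values() if hit)
    return covered / len(coverage_bins) >= threshold

# Example: coverage state after a set of system-level scenario runs.
bins = {
    "boot_sequence_complete": True,
    "dma_burst_write": True,
    "cache_eviction_under_load": False,  # scenario not yet exercised
    "interrupt_during_dma": True,
}

# 3 of 4 bins hit (75%), below a 95% threshold, so more runs are needed.
print(completion_reached(bins))
```

The point of the sketch is that the decision to stop is driven by coverage data, not by how many cycles the accelerator has run, which is the distinction the article draws between raw speed and verification acceleration.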