When Intel engineers were asked to verify one of the company's largest Many Integrated Core (MIC) designs, they faced a quandary. On one hand, they wanted the visibility and debug features provided by their Specman e language simulation environment. But they also wanted the much faster speeds provided by emulation. A recorded presentation at the Cadence web site shows how they were able to accomplish both, using essentially the same code base.
The presentation, titled "Common Specman Based Simulation and Emulation Solution," was one of 30-plus customer and partner presentations at the EDA360 Theater in the Cadence booth at the Design Automation Conference (DAC 2012). Audio recordings and slides from most of the presentations are located here, and a previous blog post lists the titles of the available presentations.
Blake French, MIC testbench architect at Intel, began the presentation by noting that the Specman simulation environment provides strong visibility and debug features. But due to the size of the MIC design, simulation times were very long. Emulation is very fast, but debugging is difficult. "We came to the conclusion that we wanted the best of both worlds," he said. "We wanted all the visibility and debug capability of our simulation based environment, yet we wanted the run times of our emulation environment. So our goal was to do that using the same source code."
Given that the simulation environment was written primarily in e, while emulation flows typically relied on C/C++ or Perl, this was a challenging task. Intel engineers wanted all the visibility, debugging, and coverage features provided by simulation. They decided, however, that they would tolerate a 50% slowdown in the emulation platform if they could obtain all the visibility they wanted. After all, simulation runs in the Hz range, while emulation runs orders of magnitude faster, in the KHz range, so even a halved emulation speed is a dramatic improvement over simulation.
French provided a very detailed description of how Intel transitioned a simulation-based environment into a verification environment that also runs in emulation hardware. The main point is that temporal activity needs to be stripped from the testbench and moved into the hardware itself. Temporal "while" loops, of which a testbench may contain hundreds or thousands, generate a great deal of activity that the kernel must manage in software. Thus, if temporal statements are left in a software testbench, emulation will be slowed dramatically.
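The presentation's code is not reproduced here, but a hypothetical e sketch (the unit, method, and event names are invented for illustration, not taken from Intel's testbench) shows the kind of temporal construct that has to be stripped out. Every wait hands control back to the Specman scheduler, so hundreds of such loops keep the software kernel busy on every emulator clock:

```e
<'
unit bus_monitor {
    // Hypothetical acceleration-hostile monitor: a time-consuming
    // method (TCM) containing a temporal loop. Each "wait" returns
    // control to the Specman kernel, which must re-schedule the
    // loop for every packet -- fine in simulation, costly under
    // emulation.
    event clk_rise;    // sampled clock event
    event pkt_valid;   // packet-valid indication

    monitor_bus() @clk_rise is {
        while TRUE {
            wait until @pkt_valid;   // temporal statement, kernel-scheduled
            collect_packet();        // procedural work on the sampled data
        };
    };

    collect_packet() is { };  // placeholder for the procedural handling
};
'>
```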
It's a simple concept, but the implementation took a number of steps, as French described along with coding examples. Ultimately, a packet monitor handles procedural activities, and no temporal statements remain in the testbench. Module binding is used to sample the hardware, and a dynamic event linker routes information to the testbench. Intel used the Direct Programming Interface (DPI), which is supported by most EDA vendors, for communication between the software testbench and the emulated device under test (DUT).
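Again as a hypothetical sketch rather than Intel's actual code, the acceleration-friendly counterpart moves the detection logic into the emulated hardware; the e side then reacts to a hardware-driven event with purely procedural code, leaving nothing temporal for the kernel to schedule:

```e
<'
unit accel_monitor {
    // Hypothetical acceleration-friendly monitor. The pkt_captured
    // event is assumed to be driven from synthesizable monitor logic
    // sampled via module binding and routed into e (for example,
    // through a dynamic event linker); the handler below is purely
    // procedural.
    event pkt_captured;  // emitted from the hardware side

    on pkt_captured {
        collect_packet();  // procedural handling only -- no waits
    };

    collect_packet() is { };  // placeholder for the packet monitor logic
};
'>
```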
But even though Intel was able to transition most of its testbench code to an acceleration-compliant environment, the result still wasn't quite good enough at first. With extensive profiling, engineers found that 15-20% of the testbench time was spent in the Specman kernel. That's because the kernel was still trying to schedule activity as if temporal constructs were present. Engineers were able to eliminate this overhead so that during emulation, the kernel is called only for garbage collection.
French shared some performance data at the end of the presentation. He noted that "emulation live" performance (where hardware and simulation are running in lockstep) ranges up to 60 KHz, while "emulation post process" performance (hardware running alone) ranges up to 120 KHz. "The bottom line is that we were successful in combining our e based simulation environment in a fashion where it can be used for both emulation and simulation, and for the most part we're using the same code base," he concluded.
If you'd like to see the details of how this transition was accomplished, you can "attend" the half-hour audio presentation here. There are some good questions at the end, so I suggest staying for the whole presentation.