Acceleration And Emulation – Why HW/SW Integration Needs Both

Filed under: Industry Insights, FPGA, Palladium, verification, Incisive, Simulation, Simulator, accelerator, acceleration, emulator, emulation, prototypes

Early software development on virtual prototypes is a great capability, but at some point hardware/software integration requires the accuracy that only real hardware can bring. When that occurs, there are three choices - acceleration, emulation, and FPGA prototypes. Even though we are accustomed to seeing acceleration and emulation as almost the same thing, each of these three choices is different, and each has a distinctive role to play.

I have these thoughts as Cadence rolls out the Palladium XP, a verification computing platform that unifies acceleration capabilities from the Incisive Xtreme product line with Incisive Palladium emulation, incorporating some of the strongest capabilities from each platform.  You can read the press release here, but in this blog I'll look at the larger story behind the announcement. Why put acceleration and emulation in a single environment? What role does either play in hardware/software integration? And how do we define "acceleration" and "emulation," anyway?

Defining acceleration and emulation

I put these questions to Ran Avinun, product management group director for system design and verification at Cadence. The answers are not as straightforward as one might think:

  • In acceleration, the design typically runs on the hardware while a simulation testbench runs on the workstation. If the testbench is transaction-based, the result is transaction-based acceleration or, as some say, co-emulation (see the sketch after this list).
  • In emulation, the entire design and verification environment is generally running on the hardware. In addition to the hardware in the emulation box, portions of the design or the testbench may also be running on external target hardware through in-circuit emulation.
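
To make the acceleration split concrete, here is a minimal Python mock of transaction-based acceleration. The class and method names are hypothetical - this is not a real Palladium or SCE-MI API - and the "hardware" side is just a stand-in so the example runs on its own. The point is the division of labor: an untimed testbench on the workstation, a transactor bridging down to cycle-level activity on the design.

    # Conceptual mock of transaction-based acceleration ("co-emulation").
    # All names here are hypothetical, not a real Palladium or SCE-MI API.

    import random

    class HardwareDut:
        """Stand-in for the design mapped into the acceleration hardware."""
        def clock_cycle(self, pins):
            # In real acceleration this evaluates in the box, not in Python.
            # Toy behavior: the design returns the inverted write data.
            return {"rdata": pins["wdata"] ^ 0xFF}

    class Transactor:
        """Bridges untimed transactions to cycle-by-cycle pin activity."""
        def __init__(self, dut):
            self.dut = dut
        def write_read(self, addr, wdata):
            # One transaction may expand into many hardware clock cycles.
            pins = {"addr": addr, "wdata": wdata, "we": 1}
            return self.dut.clock_cycle(pins)["rdata"]

    def testbench(xactor):
        """Workstation side: randomize stimulus and check results, as in simulation."""
        for _ in range(4):
            addr, wdata = random.randrange(16), random.randrange(256)
            rdata = xactor.write_read(addr, wdata)
            assert rdata == wdata ^ 0xFF
            print(f"addr={addr:2d} wdata={wdata:3d} rdata={rdata:3d}")

    testbench(Transactor(HardwareDut()))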

You can see from the above that there's a fine line between these definitions. But there's still a distinction. With acceleration, you are speeding up a simulator, and you should retain most or all of the simulator's benefits. For example, Palladium XP, which is optimized to accelerate the Incisive Enterprise Simulator (but can support other simulators as well), supports features such as executable verification plans (vPlans), metric-driven verification with OVM-based testbenches, pseudo-random test generation, and coverage metrics in acceleration mode. Metrics can be collected and analyzed by the Incisive Enterprise Manager.

Emulation generally provides a self-contained verification environment. It provides extremely fast, testbench-independent verification speeds - up to 4 MHz with the Palladium XP - and offers fast bring-up times. It lets you plug in real-world hardware and run real-world applications. Emulation does not, however, provide the stimulus randomization capability of simulation and acceleration, and coverage metrics are limited to the design since the testbench is ported into the hardware.
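
To put that speed in perspective, here is a rough back-of-the-envelope comparison. Only the up-to-4-MHz emulation figure comes from the announcement; the workload size and the simulation and acceleration rates below are illustrative assumptions, since real numbers vary widely by design and testbench.

    # Rough turnaround-time comparison for a fixed cycle count at different
    # execution rates. Only the 4 MHz emulation figure is from the post;
    # everything else here is an illustrative assumption.

    CYCLES = 2_000_000_000  # e.g., cycles to boot an OS on the design (assumed)

    rates_hz = {
        "RTL simulation (assumed ~100 Hz)": 100,
        "acceleration (assumed ~100 kHz)": 100_000,
        "emulation (up to 4 MHz, Palladium XP)": 4_000_000,
    }

    for name, hz in rates_hz.items():
        hours = CYCLES / hz / 3600
        print(f"{name:40s} {hours:10.2f} hours")

Under these assumptions, a workload that takes months in simulation takes hours in acceleration and minutes in emulation - which is why the lowest levels of hardware/software integration gravitate to the hardware-based engines.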

In the hardware/software integration process, therefore, you might start with a software virtual prototype, and then find you need more accuracy to debug hardware/software interactions or run a power analysis. You can first move some portions of the design into hardware while running the verification environment on a workstation. This is "acceleration."

The next step would be to move everything, including the testbench, into hardware, at which point you are running "emulation." Now you have fast execution speeds and the ability to directly run applications on target hardware. You can also run the environment in a hybrid (acceleration plus emulation) mode in which part of the design or testbench runs on the workstation while the emulator is connected to a target. A continuum flow that combines acceleration and emulation is thus an important feature (sketched below).
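
The value of that continuum is easiest to express as a sketch: one regression flow, interchangeable execution backends. Everything below is hypothetical pseudocode of the concept, not a real tool interface.

    # Conceptual sketch of a simulation/acceleration/emulation continuum:
    # the same regression flow runs against interchangeable backends, so
    # moving a design down the continuum does not force a testbench
    # rewrite. Class and method names are hypothetical.

    class SimulationBackend:
        name = "simulation"
        def run(self, cycles):
            print(f"[{self.name}] design and testbench on the workstation")

    class AccelerationBackend:
        name = "acceleration"
        def run(self, cycles):
            print(f"[{self.name}] design in hardware, testbench on the workstation")

    class EmulationBackend:
        name = "emulation"
        def run(self, cycles):
            print(f"[{self.name}] design and testbench in hardware; in-circuit targets")

    def regression(backend, cycles=1_000_000):
        # The flow itself is unchanged regardless of where the design executes.
        backend.run(cycles)

    for backend in (SimulationBackend(), AccelerationBackend(), EmulationBackend()):
        regression(backend)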

What about FPGA prototypes?

Can FPGA prototypes replace accelerators and emulators? FPGA prototypes are, after all, much faster. But as Ran noted in a recent Q&A interview, bring-up time is unpredictable and accuracy is questionable, since the design may change as it is mapped to FPGA hardware. Design changes are often required because partitioning and clocking are not fully automatic. Turnaround time and debug visibility may also be poor with FPGA prototyping systems.

The diagram below shows how virtual prototypes, acceleration/emulation, and FPGA prototypes address hardware/software development and integration. Virtual prototyping is most useful for high levels of the software stack that have little or no hardware dependency. FPGA-based prototyping makes it possible to move further down the stack. Acceleration/emulation is the best solution for the lowest parts of the stack, which typically have a strong hardware dependency. Accuracy and short bring-up times, which are the strong points of acceleration/emulation platforms, are critical.

However, emulation and acceleration may also be used earlier in the process, while FPGA-based prototyping is used later, once most of the hardware bugs have already been removed.

[Diagram: virtual prototypes, acceleration/emulation, and FPGA prototypes mapped against the hardware/software stack]

A verification computing platform that brings forth the best capabilities of both acceleration and emulation is thus an important tool for hardware/software integration.

Richard Goering

Comments (1)

By David Murray on April 27, 2010
I like the high-level diagram showing the main technologies for HW/SW integration, but for me another story emerges. All of these environments have their own pros and cons, and with next-generation chip design complexities and aggressive product cycles, all of them will probably be needed. The question will not be which of these technologies to use but how to use all of them together. The basic unit of HW/SW interface accuracy is the register, so what we need to ensure is that all the models (e.g. firmware model, virtual, TLM, RTL, HVL models) are fully aligned. In order to facilitate smoother HW/SW integration right throughout the design flow, we need register management tools like Socrates Bitwise (www.duolog.com/.../bitwise-register-management) to keep the models aligned and reduce time spent in debug.
I think that acceleration and emulation are a good bridge between virtual and FPGA prototyping, but model management and centralization is key to a smooth transition between these technologies.
