Hardware-assisted verification - including acceleration, emulation, and FPGA prototyping - is no longer optional for complex SoCs. In this interview, Christopher Tice, corporate vice president and general manager for hardware system verification at Cadence, explains why. He talks about what's really important in hardware-assisted verification (hint: it's not just raw speed), traditional and new usage models, and the importance of a connected suite of development platforms such as the System Development Suite.
Q: Chris, hardware-assisted verification has been around for many years. Is it becoming more important and if so why?
A: Yes, it is becoming more important. There are three reasons. First, there is a secular shift in verification requirements as designs have shifted to advanced nodes, with an explosion in chip complexity. Second, there is a shift towards the integration of more complex IP, and hardware-assisted verification is the platform of choice for integration. Third, IP comes with software, and it needs platforms to run on. Again, hardware-assisted verification is the preferred platform.
Q: What is the traditional use model for acceleration and emulation?
A: Typically acceleration has been used early in the design cycle when you have a testbench-driven verification environment. What's important here is the reactivity of the verification solution - that is, the design can be reliably compiled and turned around fast enough to keep up with the design team and design changes.
As the design matures, you shift from a verification-centric approach to a system-centric approach. That's where emulation starts to become the dominant use case. In this case you are integrating the entire SoC or subsystem into the emulation environment and effectively running it against the software stack. Perhaps the verification is software-driven. Another use model is in-circuit emulation [ICE], in which the emulator is connected to the outside environment. Test streams may come from the real environment itself.
Q: What really matters when people are evaluating acceleration or emulation platforms? Is it mostly a matter of raw speed, or are there other concerns?
A: What really matters is the overall productivity of your platform. That's a function of the ability to bring a model up reliably inside the platform, and to use that platform to effectively debug and perform complex verification. Productivity is about predictability - you can predict when the platform is going to be up, and you can turn your model around reliably. It's also about versatility - you need a platform that can be used early in the design process when you're in acceleration mode, all the way through to the end of the process for system integration.
Q: Hardware-assisted verification has become a competitive business. What distinguishes the Cadence Palladium XP from other products that offer acceleration and emulation?
A: We are the clear leader in the ICE marketplace - we were the inventors of ICE. Moreover, we have use model versatility. You can use the same platform you use for ICE in the verification environment, and you can also use it for acceleration, dynamic power analysis, or a hybrid solution of ICE plus acceleration.
We have the notion that Palladium is the verification hub. It has a variety of solutions and methodologies that ride around it. We have virtualized ICE, which we call in-circuit acceleration. Beyond that we have the hybrid use model, where a soft model of the processor runs alongside a hard model of the rest of the DUT [device under test].
Q: You mentioned the hybrid use model, which links virtual processor models to emulation. Can you say more about how it works and why it's used?
A: In traditional ICE, the entire design is inside the emulation box. This includes the CPU and applications processor or GPU. These all tend to run in lockstep around 1 MHz or so. That does give you a high-fidelity, accurate model, but it may take hours or tens of hours to boot an OS when you're running at 1 MHz.
The hybrid model takes an instruction-accurate model of the processor and connects it to the rest of the circuit through a transaction-based interface. So now when you're running CPU-based traffic, you're running it on a virtual model. Instead of running at 1 MHz you may be running at tens of MIPS, and sometimes up to 100 or 200 MIPS, perhaps 100X faster than the emulator. If you're running a CPU boot operation, you can run anywhere from 50X to 60X faster.
When you run things that are applications-centric, you're still running the actual RTL at emulation speeds, and you get full accuracy. The processor is running as a virtual model, and the rest of the design is running in RTL. That combination tends to result in a 10X-15X performance increase. The net is a big productivity gain for the software development and hardware/software co-verification teams. A side benefit is that you gain 30% to 40% capacity, since the virtual model is running on a workstation.
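To make these ratios concrete, here is a minimal back-of-the-envelope sketch of why a 1 MHz full-RTL ICE run turns an OS boot into hours while a ~100 MIPS virtual CPU model finishes in minutes. The cycle count below is an assumed illustrative workload, not a figure from the interview.

```python
def run_time_seconds(cycles, rate_hz):
    """Wall-clock time to execute a workload of `cycles`
    processor cycles at an effective rate of `rate_hz`."""
    return cycles / rate_hz

# Assumed workload: an OS boot consuming ~20 billion CPU cycles.
BOOT_CYCLES = 20e9

ice_time = run_time_seconds(BOOT_CYCLES, 1e6)       # full-RTL ICE at ~1 MHz
hybrid_time = run_time_seconds(BOOT_CYCLES, 100e6)  # virtual CPU at ~100 MIPS

print(f"ICE boot:    {ice_time / 3600:.1f} hours")    # 5.6 hours
print(f"Hybrid boot: {hybrid_time / 60:.1f} minutes") # 3.3 minutes
print(f"Speedup:     {ice_time / hybrid_time:.0f}X")  # 100X
```

Under these assumptions the boot-phase speedup is exactly the ratio of the two effective rates (100 MHz / 1 MHz = 100X), which is why the gain shrinks to 10X-15X once the workload spends most of its time in RTL running at emulation speed.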
Q: Is the virtual model that you use in hybrid mode the same model that you'd use in a virtual platform?
A: Yes, and that is part of the vision of the System Development Suite. You have a standalone virtual platform based on the Virtual System Platform [VSP], you have a connection between VSP and the [Incisive] simulation environment, and you can get hybrid simulation models. Then you can port the RTL into Palladium and get acceleration on top of that.
Q: Cadence has also recently introduced an embedded testbench use mode. How does that work?
A: With the embedded testbench you are basically building a model for the entire board, and for the DUT and all of its interfaces. You create configurable models and test sequences that can drive stimulus through a processor. Effectively, you have a test processor that runs inside the environment itself. Customers who have complex platform interfaces are the most attracted to this use model.
Q: Hardware-assisted verification also includes FPGA prototyping, which Cadence provides in the Rapid Prototyping Platform (RPP). Where does that fit into the flow and how does it complement emulation?
A: FPGA prototyping has been around for a long time. In its current context, FPGA prototyping uses multiple FPGAs that are connected together. It takes time to build that model, and properly matching the RTL is quite difficult, so it tends to be used at the tail end of the design cycle.
At Cadence, we view FPGA prototyping as a complement to our mainstream verification solutions around Palladium and Incisive. One way we approach this is by enabling customers to build faster replicated models of Palladium. These can be used for regression tests or for software development.
Q: Finally, what is the future of hardware-assisted verification?
A: I think the future is really exciting, because we have found that hardware-assisted verification is no longer optional. This creates an explosion in the marketplace. Customers have insatiable demands because their designs keep growing and consuming more and more capacity.
Our strategy with the System Development Suite is to develop a suite of interconnected platforms. There is no single solution, but with the System Development Suite there is a series of platforms that solve the challenges of complex systems and complex software. We provide a series of open, connected and scalable platforms, from the Virtual System Platform to Incisive and Palladium and the Rapid Prototyping Platform, all connected through common flows.
Related Blog Posts
Designer View: New Emulation Use Models Employ Virtual Targets
Designer View: Embedded Palladium Testbench Speeds System Bring-Up
Palladium XP II - Two New Use Models for Hardware/Software Verification