Chi-Ping Hsu is senior vice president of research and development for the Cadence Implementation Products Group. He is responsible for analog design and verification, digital implementation and signoff, mixed-signal design and implementation, physical verification, DFM, and PCB and package design. In this interview, he discusses five significant technology initiatives underway at the Implementation Products Group.
Q: What initiatives are underway at Cadence today?
A: Basically the initiatives are MPI [the Metric-Driven Productivity Initiative], GGI [the GHz GigaGates Initiative], PFI [the Power Forward Initiative], digital-enabled mixed signal, and SiP [System in Package]. With the exception of PFI, where what we did was very public, these are all product initiatives.
Q: What does MPI involve?
A: This initiative is focused on full-custom and analog design, which is traditionally an area that’s been interactively driven. It is always challenging to see how the software you develop impacts the user’s productivity. Everyone talks about productivity, but the change in mindset is getting people to measure it. Unless we can measure things, it’s hard to say whether we’re improving.
A large full-custom design takes several hundred people, usually located at different geographic sites. Of course people measure project duration and resource requirements; the question is how to improve them. On the pure EDA software side, that can be as simple as measuring how long it takes to open a large database, or how fast the response is over the network. But it’s a lot more difficult to measure how effectively users are using your tool. We are now trying to get customers to work with us as partners so we can get visibility into how individual users are using the tool.
The way we do it is a lot more sophisticated than what customers can do themselves with project management. We actually put some data mining capability inside the Virtuoso platform. You can turn on a switch and see how the user is using the tools and look at duration and command sets. When we talk about this, managers at customer sites get excited. They don’t have anything that’s automated to this level of granularity.
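The kind of command-usage mining described above can be illustrated with a small sketch. Everything here is hypothetical — the log format, the command names, and the notion of an "advanced" command set are invented for illustration, not details of Cadence's actual Virtuoso instrumentation:

```python
from collections import Counter

# Hypothetical per-user tool log: one "<timestamp> <command>" entry per line.
LOG = """\
09:01:02 create_wire
09:01:05 create_wire
09:02:11 stretch
09:03:40 create_wire
09:05:17 align
"""

# Hypothetical set of "advanced" commands a methodology team wants adopted.
ADVANCED = {"align", "abut", "auto_route", "constraint_edit"}

def usage_report(log, advanced):
    """Count command invocations and the share of advanced commands used."""
    counts = Counter(line.split()[1] for line in log.splitlines() if line.strip())
    used_advanced = advanced & set(counts)
    return counts, len(used_advanced) / len(advanced)

counts, coverage = usage_report(LOG, ADVANCED)
print(counts.most_common(3))   # most frequently invoked commands
print(f"{coverage:.0%} of advanced commands exercised")
```

Aggregating reports like this across a project is what lets a manager see, at a glance, which capabilities a team never touches — the gap that training is meant to close.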
Q: What have you discovered through this process?
A: To our surprise, very few users are using our advanced features. The fact that we have 100 commands and they only use 5 to 10 of them gives them a great opportunity to get some training going and improve their productivity. If you have 300 people on a project, a 30 percent improvement in productivity means you can free up roughly 90 people.
Q: Mixed-signal design and verification has been around for a long time. What’s different about the new mixed-signal initiative?
A: The centerpiece is a real mixed-signal methodology, not “big A” or “big D” per se. Before, “big A” implied that Virtuoso is the cockpit, and “big D” assumed that Encounter is the cockpit. We’re trying to blend the two and figure out what an infrastructure should resolve across these two domains. Instead of talking about “big A” or “big D,” we need to figure out a common solution.
Until OpenAccess, the databases were traditionally separate. Power and ground representations in the [analog] schematic world are very different from those in the [digital] netlist-driven world. We’re trying to sort all these things out.
What we’re trying to do is on a large scale, like PFI. It encompasses functional verification, front-end design, back-end implementation, and analysis and signoff all together holistically.
When we do PFI projects, we find that almost every chip has analog components. People ask how they can get the analog portion of the design to understand CPF [Common Power Format]. We are working on CPF-based analog/mixed-signal simulation. That is unique. Before, if you had RTL simulation with CPF and you did co-simulation with Spice, the Spice simulation would core dump, because it didn’t understand power shutoff or unknown signals.
Q: What needs to improve with mixed-signal verification?
A: It’s a very difficult technical problem. Mostly the industry uses co-simulation – if you want to do something at the Spice level, you need two engines. If you model using behavioral analog, what guarantees that the behavioral model will match the Spice model? Our goal is to learn how to apply digital test coverage and verification methodologies in that [mixed-signal] verification environment. There is so much methodology on the digital side, but it is difficult to use digital methodologies directly on analog.
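One way to frame the behavioral-vs-Spice matching question is as a regression check: drive the same stimulus through both models and bound the mismatch. The sketch below is purely illustrative — the waveform samples and the 50 mV tolerance are made-up numbers, not output from any actual simulator:

```python
# Illustrative check that a behavioral model tracks a reference (Spice-level)
# result. The sampled voltages and tolerance are invented for this sketch.
spice_out = [0.00, 0.62, 1.10, 1.45, 1.63, 1.71]   # reference voltages (V)
behav_out = [0.00, 0.60, 1.08, 1.47, 1.60, 1.70]   # behavioral-model voltages (V)
TOLERANCE = 0.05  # acceptable absolute mismatch, in volts

def max_mismatch(ref, model):
    """Largest absolute difference between two equally sampled waveforms."""
    return max(abs(r - m) for r, m in zip(ref, model))

err = max_mismatch(spice_out, behav_out)
print(f"max mismatch: {err:.3f} V -> {'PASS' if err <= TOLERANCE else 'FAIL'}")
```

A check like this gives a pass/fail criterion a digital-style regression suite can consume, which is the spirit of bringing digital coverage methodology into mixed-signal verification.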
Q: What’s the aim of the GGI initiative?
A: It’s about digital IC integration as process nodes move from 40 nm to 32 nm and 22 nm. There is an explosion in complexity and process variation. When I talk to large SoC-based customers, they ask me how to solve problems when they have 800 million gates and variable process parameters, and still need to turn things around in 3 to 6 months. By their estimates, even running everything multi-threaded on thousands of servers, a single RTL-to-GDSII iteration will still take 28 days.
GGI is intended to look at those challenges and see what we can do from the software side. What next-generation architecture do we need to have? What hierarchical digital methodology should we have? It’s not only about throughput, it’s also about variability and high-frequency requirements. But I don’t believe the problem can be addressed purely through software. It has to combine a design methodology and IP reuse methodology with tools to achieve the kind of turnaround time people need.
Q: Cadence recently announced a decreased investment in the manufacturing side of DFM. What’s the current DFM strategy?
A: We have acquired a lot of DFM technology. From Clear Shape we have LEA [Litho Electrical Analyzer] and LPA [Litho Physical Analyzer]. From CommandCAD we have pattern recognition. The CMP [chemical-mechanical polishing] technology from Praesagus is a flagship in the industry. All of these will be aggressively used and developed moving forward. The only part we will de-invest in is mask-level optimization.
Q: Power Forward was introduced some time ago. Is there ongoing activity?
A: Absolutely. We have recently added new members to PFI, and we now have 36 members. On the technical side, CPF 1.0 is in production. It provides a top-down methodology. CPF 1.1 is not only top-down, but also bottom-up, meaning that you can reuse IP with advanced low power techniques. Modeling for IP reuse with low-power techniques is extremely complex, and we have been working on it for almost two years. CPF 1.1 is now in an initial deployment stage and we’re getting feedback. We believe it’s far ahead of UPF [Unified Power Format].
The biggest problem [with power formats] is the consistency between all the different products. Within Cadence, we went through so much effort to make sure CPF worked across 10 or 12 business units. We created test suites. It must have taken hundreds of man years. I haven’t seen Mentor, Magma and Synopsys create any kind of task force to validate [UPF] or to have a test chip go out. Nobody talks about the flow, only about the format.
Q: What does the SiP initiative include?
A: Packages are very, very complex. With multiple die on the package, the complexity becomes extremely high. Depending on what kind of die you have – analog, RF, digital – there is suddenly another layer between the PCB and IC where decisions need to be made. Fortunately, we own PCB, we own packaging, and we own analog and digital. So we spent a lot of time making sure that you can do IC/package/board co-design. Today we have RF SiP and we have digital SiP.
SiP is not by itself a big market, but the combined capability is very important for large customers. It becomes a strategically differentiating capability for those key customers.
Q: Does the initiative include 3D ICs and TSVs [through silicon vias]?
A: We have been working on TSVs for two years. TSMC’s TSV test chip was done with our tools. We are working aggressively in that space, and we’re doing it based on the existing infrastructure, rather than building a new platform. We believe the approach we’ve taken is an appropriate and workable solution.