User Perspective: What Changes When SoCs Move To 40 nm

Filed under: Industry Insights, DAC, SoC, IP Evaluation, AppliedMicro, Management Day, 40 nm, Khare

What are the "gotchas" as design teams move to 40 nm process nodes and below? The best way to find out is to hear from someone who's been there. At Management Day at the recent Design Automation Conference, Jitendra Khare, director of central engineering at AppliedMicro, presented the most comprehensive and informative list I've seen of the challenges that emerge as SoCs move to 40 nm.

While Management Day looked at both the technical and business challenges of complex SoCs, Khare's presentation, in a paper session I moderated, stayed on the technical side. (I previously blogged about a Management Day panel on which Khare and four other presenters appeared). Management Day was sponsored by Cadence.

Khare opened his presentation by talking about trends that are driving SoCs to lower process nodes, including multiple embedded cores, complex interfaces, smart power management, and cost concerns. The need to support a variety of applications with low-cost hardware is a key overall driving force.

But there are some things that "need to be done differently for 40 nm SoCs," as Khare said. Here are some of the challenges he cited.

Hard IP Procurement

A large SoC can contain dozens of IP blocks, and many of them are "hard" IP blocks that have already gone through physical implementation. "One thing we notice is that timing corners just explode at 40 nm," Khare said. "You need to negotiate all corners in advance with IP vendors." The use of low-power design techniques increases the number of corners even further, he said.
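The "explosion" is combinatorial: each signoff corner is one combination of process, voltage, and temperature, and low-power techniques multiply those by the chip's power modes. A minimal sketch, with purely hypothetical corner values (not AppliedMicro's actual corner set):

```python
from itertools import product

# Illustrative corner dimensions for a 40 nm low-power SoC; real sets
# are negotiated per design and per hard-IP vendor.
process     = ["ss", "tt", "ff"]                 # slow / typical / fast silicon
voltage     = [0.99, 1.10, 1.21]                 # nominal 1.1 V +/- 10%
temperature = [-40, 25, 125]                     # degrees C
power_modes = ["active", "sleep", "retention"]   # low-power design adds these

corners = list(product(process, voltage, temperature, power_modes))
print(len(corners))   # 3 * 3 * 3 * 3 = 81 combinations to sign off
```

Even this toy example yields 81 combinations; dropping the power-mode axis would cut it to 27, which is one way to see why low-power design makes corner negotiation with IP vendors so much harder.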

Khare noted that hard IP must also be compliant with design for manufacturability (DFM) requirements, and must be available for all of the metal stack options that may be employed in the SoC design (or, I would think, its derivatives).

Power Reduction

"The main thing at 40 nm is leakage power," Khare said. "You cannot underestimate the significance of leakage. It can kill your chip." At high temperatures, he said, leakage power can be twice the dynamic power at 40 nm.

So what can you do? For memory leakage, you can't do much about the bit cells, but you can use high threshold voltage (HVT) decoders, memory sleep modes, and latch-based RAMs. Standard cell leakage can be controlled with HVT cells, but there's a performance tradeoff. Another possibility is using a 50 nm cell library. This improves leakage and timing performance, but as you'd expect, there's an area penalty.
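The temperature dependence is what makes leakage so dangerous: dynamic power scales with activity, capacitance, voltage squared, and frequency, while subthreshold leakage grows roughly exponentially with temperature. A rough sketch of the trend Khare describes, with hypothetical coefficients chosen only to illustrate the shape (not measured 40 nm data):

```python
# Illustrative power model -- all numbers are hypothetical.
def dynamic_power(alpha, c_total, vdd, freq):
    # Classic switching power: P = alpha * C * Vdd^2 * f
    return alpha * c_total * vdd**2 * freq

def leakage_power(p_leak_25c, temp_c, doubling_deg=20.0):
    # Assume leakage doubles every `doubling_deg` degrees C
    # (an illustrative figure for the exponential trend).
    return p_leak_25c * 2 ** ((temp_c - 25.0) / doubling_deg)

p_dyn = dynamic_power(alpha=0.15, c_total=2e-9, vdd=1.1, freq=800e6)  # ~0.29 W
for t in (25, 85, 125):
    ratio = leakage_power(0.018, t) / p_dyn
    print(f"{t:4d} C  leakage/dynamic = {ratio:.2f}")   # ~2x at 125 C
```

With these made-up coefficients, leakage is a rounding error at room temperature but roughly twice the dynamic power at 125 degrees C, matching the hot-chip scenario Khare warns about.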

Test and Reliability

Khare talked at length about this topic. Key points include:

  • At-speed scan is an "absolute requirement" at 65 nm and below
  • Hard IP blocks need to be on separate scan chains
  • BIST must recognize power domains
  • Memory BIST should define the test algorithm at run time
  • The soft error rate (SER) increases at 40 nm and is higher for flops than for SRAMs; most RAMs need ECC protection against soft errors
  • Negative bias temperature instability (NBTI) increases at 40 nm and must be accounted for as a design margin
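To make the ECC point concrete, here is a minimal SECDED (single-error-correct, double-error-detect) Hamming sketch for one 8-bit word. It is illustrative only: real memory ECC protects much wider words (commonly 64 data bits plus 8 check bits) and is implemented in hardware, but the flip-a-bit-and-recover behavior is the same.

```python
def hamming_encode(data_bits):
    """Encode 8 data bits into a 13-bit SECDED word: positions 1..12 form
    a Hamming(12,8) code (parity at 1, 2, 4, 8), and position 0 holds an
    overall parity bit that enables double-error detection."""
    assert len(data_bits) == 8
    code = [0] * 13
    d = iter(data_bits)
    for pos in range(1, 13):
        if pos & (pos - 1):          # not a power of two -> data position
            code[pos] = next(d)
    for p in (1, 2, 4, 8):           # even parity over covered positions
        code[p] = sum(code[i] for i in range(1, 13) if i & p) % 2
    code[0] = sum(code) % 2          # make total parity even
    return code

def hamming_decode(code):
    """Return (data_bits, status): 'ok', 'corrected', or 'double-error'."""
    code = list(code)
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(code[i] for i in range(1, 13) if i & p) % 2:
            syndrome |= p
    overall = sum(code) % 2          # 0 if total parity is still even
    if syndrome == 0 and overall == 0:
        status = "ok"
    elif overall == 1:               # odd parity: a single-bit error
        if syndrome:
            code[syndrome] ^= 1      # syndrome points at the flipped bit
        status = "corrected"         # syndrome 0 -> the parity bit itself
    else:                            # parity even but syndrome nonzero
        status = "double-error"      # uncorrectable: detect only
    data = [code[pos] for pos in range(1, 13) if pos & (pos - 1)]
    return data, status
```

A single upset anywhere in the word is silently corrected, and two upsets are at least flagged, which is why SER-sensitive RAMs get ECC rather than simple parity.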

Package Design

The main message here is that package design is a very important consideration, especially with the current push toward low-cost packaging. You need to simulate every package design for power and signal integrity. On-package capacitors are becoming necessary, and packages must be designed for the current surges caused by at-speed scan tests.
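A quick way to see why on-package capacitors become necessary is the classic first-order decap sizing relation C >= I * dt / dV: the capacitor must supply a current surge I for a duration dt while keeping supply droop under dV. The numbers below are hypothetical, for illustration only:

```python
# Back-of-the-envelope decoupling-capacitor sizing (hypothetical numbers).
def min_decap_farads(surge_a, duration_s, max_droop_v):
    # Charge delivered during the surge must come from the cap:
    # Q = I * dt = C * dV  ->  C = I * dt / dV
    return surge_a * duration_s / max_droop_v

# e.g. a 2 A at-speed-scan surge lasting 5 ns, with 50 mV allowed droop
c = min_decap_farads(surge_a=2.0, duration_s=5e-9, max_droop_v=0.05)
print(f"{c * 1e9:.0f} nF")   # 200 nF
```

Even this toy calculation lands in a range that board-level decoupling cannot serve fast enough, which is the argument for putting the capacitance on the package itself.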

Signal Integrity

Khare noted that high-speed interfaces require chip/package/board co-simulation for signal integrity effects. And hard IP blocks must be simulated as well. "You cannot trust what the IP vendor is telling you. You have to do simulations in house," he said.

Why Bother to Move?

With a list like this, you may wonder why people don't just stay at 65 nm or 90 nm (or even 130 nm or 180 nm, which are still commonly used for analog design). Many design teams will, for now. But die size, multi-function performance, and unit cost requirements will drive an increasing number of design teams to move to 40 nm and below. To make it more practical, we do need to get SoC development costs under control. The recent EDA360 vision paper has some suggestions for that.

The good news is that design tools and libraries are ready for 40 nm, and that there's good information available from those who have paved the way. This Management Day talk was one example.

Richard Goering


