32 nm Test Chips Show Layout 'Context' Matters

Filed under: Industry Insights, IBM, HKMG, CMP Predictor, Stratosphere Solutions

Real silicon reveals a lot about how new silicon processes work, and two 32 nm test chips that Cadence recently completed on Common Platform high-k metal gate (HKMG) technology were particularly helpful from a modeling perspective. One conclusion: when you get to 32 nm and below, the “context” of a transistor – that is, what’s placed around it – has an increasing impact on variability.

Cadence announced July 29 first-silicon results from the Common Platform’s 32 nm HKMG technology. The test chips captured information related to device and interconnect variability, including systematic and random variations, as well as manufacturing effects such as lithography, thermal, stress, proximity effects, and copper deposition. The results are helping Cadence incorporate silicon-accurate 32 nm models and enhanced DFM support into the Encounter IC design tool suite.

The test chips were manufactured on an IBM “shuttle” and were part of a multi-project wafer (MPW) program that IBM launched for 32/28 nm in the third quarter of 2008. IBM made this program available to external partners earlier in the technology cycle than for previous technology nodes, said Joe Abler, senior technical staff member at IBM’s Semiconductor Research and Development Center.

“We took some of the very early shuttles that were reserved for internal development and opened them up to key development partners, including IP and EDA vendors,” he said. “The intent is to give them early access to 32 nm HKMG technology, to learn its strengths and weaknesses, and be prepared to quickly enable the technology node in terms of IP models and tools.” While the first test chips were manufactured at 32 nm, IBM states that the same design rules are scalable to 28 nm.

Vassilios Gerousis, senior architect at Cadence, noted that two test chips were run. The first looked at metallization, and will be used to build a chemical-mechanical polishing (CMP) model for the Cadence CMP Predictor. One observation, he said, is that the magnitude of the CMP variation is smaller than in previous process nodes, indicating that the Common Platform 32 nm process has good control over CMP thickness. However, variation in metal width is slightly higher, and design teams will still need to run a CMP analysis with accurate modeling.
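To make the implication concrete, here is a minimal sketch of a density-driven CMP thickness model in Python. It is not the CMP Predictor's actual algorithm, and the window size, nominal thickness, and density sensitivity below are illustrative assumptions; the point is simply that post-CMP thickness depends on metal density averaged over a planarization window, which is why layout data must feed the analysis.

```python
import numpy as np

# Toy density-driven CMP model (illustrative only, not the CMP
# Predictor algorithm): post-CMP copper thickness is approximated
# as a linear function of metal density averaged over a window.
NOMINAL_THICKNESS_NM = 140.0   # hypothetical target thickness
DENSITY_SENSITIVITY_NM = 25.0  # hypothetical swing per unit density
WINDOW = 5                     # planarization window, in tiles

def post_cmp_thickness(density_map):
    """Predict post-CMP thickness per layout tile from metal density."""
    pad = WINDOW // 2
    padded = np.pad(density_map, pad, mode="edge")
    out = np.empty_like(density_map)
    rows, cols = density_map.shape
    for r in range(rows):
        for c in range(cols):
            local = padded[r:r + WINDOW, c:c + WINDOW].mean()
            # Sparse regions dish lower; dense regions stay thicker.
            out[r, c] = NOMINAL_THICKNESS_NM + DENSITY_SENSITIVITY_NM * (local - 0.5)
    return out

# Toy layout: a dense block embedded in a sparse background.
density = np.full((20, 20), 0.2)
density[5:15, 5:15] = 0.7
thickness = post_cmp_thickness(density)
print(f"thickness range: {thickness.min():.1f} to {thickness.max():.1f} nm")
```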

The second chip was developed to examine random and systematic variability. Test chips don’t normally consider systematic variability, Vassilios said, but Cadence wanted to get an early look before standard cell libraries were available. This chip was developed in partnership with Stratosphere Solutions, a provider of IP and modeling tools for process characterization and modeling. One finding, Vassilios said, is that random variation has a higher magnitude than systematic variation at this process node, implying a greater need for statistical timing and statistical SPICE models.
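Why dominant random variation pushes designers toward statistical analysis is easy to see with a toy path-delay calculation. In the Python sketch below (gate count, delays, and sigmas are made-up illustrative numbers, not 32 nm data), a corner analysis pushes every gate to +3 sigma at once, while a statistical view lets independent per-gate variations partially cancel, so the path sigma grows with the square root of the gate count. The gap between the two results is the pessimism that statistical timing can recover.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy path: 20 gates, nominal delay 10 ps each, independent random
# per-gate sigma of 1 ps (illustrative numbers, not 32 nm data).
N_GATES, NOMINAL_PS, SIGMA_PS = 20, 10.0, 1.0

# Corner-style analysis: every gate simultaneously at +3 sigma.
corner_delay = N_GATES * (NOMINAL_PS + 3 * SIGMA_PS)

# Statistical analysis: independent variations partially cancel,
# so the path sigma grows with sqrt(N) rather than N.
paths = rng.normal(NOMINAL_PS, SIGMA_PS, size=(100_000, N_GATES)).sum(axis=1)
stat_3sigma = paths.mean() + 3 * paths.std()

print(f"corner estimate:     {corner_delay:.1f} ps")
print(f"statistical 3-sigma: {stat_3sigma:.1f} ps")
```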

Jim Bordelon, president and CTO of Stratosphere, provided some further insights. He noted that Stratosphere played a role in creating transistor layouts and analyzing random and systematic variability. His distinction between the two is simple: systematic variability has a spatial dependency on the wafer, while random variability should show the same standard deviation regardless of location on the wafer.
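That definition also suggests a characterization recipe: fit a smooth model in wafer coordinates to the measurements, call the fit the systematic component, and treat the residual as random. Here is a minimal Python sketch on synthetic data (the radial trend and sigma value are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic wafer data: a smooth radial Vt trend (systematic) plus
# location-independent noise (random). Numbers are illustrative.
n = 2000
x, y = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
vt = 0.45 + 0.02 * (x**2 + y**2) + rng.normal(0.0, 0.005, n)

# Fit a low-order polynomial in wafer coordinates: the spatially
# dependent (systematic) component.
A = np.column_stack([np.ones(n), x, y, x * y, x**2, y**2])
coef, *_ = np.linalg.lstsq(A, vt, rcond=None)
residual = vt - A @ coef  # what's left is the random component

# Per Bordelon's definition, the random residual should show the
# same standard deviation anywhere on the wafer: center vs. edge.
center = residual[x**2 + y**2 < 0.25]
edge = residual[x**2 + y**2 > 0.75]
print(f"sigma(center) = {center.std():.4f} V, sigma(edge) = {edge.std():.4f} V")
```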

The most important finding, Jim said, is that “context dependencies” have a significant influence on random variability at this process node. In other words, variability depends not just upon the characteristics of the transistor, but on what’s located around it – including active regions or other transistors. It matters whether neighboring devices are on the top, sides, or bottom. Thus, context dependency is not just a 1D effect.

“We have to capture this kind of context dependency in the modeling,” Jim said. “It just underscores the fact that characterization is important, and you need to find a way on the tool side to make it efficient to annotate your devices and your simulations to reflect these different contexts.”
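What such annotation might look like in practice: the Python sketch below tags each device on a toy placement grid with a four-sided neighbor descriptor and bins synthetic Vt measurements by that descriptor. The descriptor and the sigma-versus-context relationship are hypothetical, but they capture the 2D nature of the finding, since neighbors above and below count as well as those alongside.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(2)

# Toy placement: devices on a 30x30 grid, roughly 70% of sites occupied.
GRID = rng.random((30, 30)) < 0.7
SIDES = {"top": (-1, 0), "bottom": (1, 0), "left": (0, -1), "right": (0, 1)}

def context(r, c):
    """2D context descriptor: which of the four neighbors is occupied."""
    key = []
    for name, (dr, dc) in SIDES.items():
        rr, cc = r + dr, c + dc
        if 0 <= rr < 30 and 0 <= cc < 30 and GRID[rr, cc]:
            key.append(name)
    return tuple(sorted(key))

# Synthetic measurements: assume sigma shrinks as the neighborhood
# fills in (a hypothetical relationship, not measured 32 nm data).
by_context = defaultdict(list)
for r in range(30):
    for c in range(30):
        if GRID[r, c]:
            ctx = context(r, c)
            sigma = 0.010 - 0.0015 * len(ctx)
            by_context[ctx].append(rng.normal(0.45, sigma))

for ctx, vals in sorted(by_context.items(), key=lambda kv: len(kv[0])):
    label = "+".join(ctx) if ctx else "isolated"
    print(f"{label:26s} n={len(vals):4d}  sigma(Vt) = {np.std(vals):.4f} V")
```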

Cadence is currently working with leading-edge customers on 32 nm designs. The test chip collaboration between Cadence, IBM, and Stratosphere Solutions will help pave the way to 32/28 nm design for mainstream users.

Richard Goering

Comments(2)

By John Busco on October 7, 2009
I don't understand why these context dependencies are classified as "random" variability. Can't they be modeled deterministically if the variation is known to depend on context?
It's like interconnect coupling depending on "context" -- you model the coupling explicitly for crosstalk analysis. We don't treat crosstalk effects as "random".
John

By Jim Bordelon on October 9, 2009
To address John's comment, deterministic implies a model which produces a certain, unique result. Random implies that behavior is probabilistic. For example, the mean and standard deviation of a MOSFET threshold voltage, Vt, can be modeled as a function of size or other parameters of the environment (context), but such a model won’t uniquely describe the Vt of a particular MOSFET in a simulated circuit. The simulation must choose a Vt value in accordance with the frequency of observed Vt values. The root cause of the uncertainty could be that we don’t know the number of dopant atoms in the channel of the transistor. The random component of variability (as opposed to a systematic component, which is described deterministically) can be dominant in many contexts. In such cases, 3D physical models may not capture all relevant interactions (or their stochastic nature), necessitating more accurate empirical approaches based on statistical characterization.
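In code terms, the distinction Jim draws might look like the sketch below (the functions and coefficients are hypothetical): the mean and standard deviation of Vt are deterministic functions of device size and context, but a simulator must still draw each instance's actual Vt from the resulting distribution.

```python
import numpy as np

rng = np.random.default_rng(3)

def vt_mean(width_nm, n_neighbors):
    # Deterministic: the same size and context always give the same mean.
    return 0.45 - 0.0005 * n_neighbors + 0.00002 * width_nm

def vt_sigma(width_nm, n_neighbors):
    # Random-dopant sigma scales roughly with 1/sqrt(area); context
    # shifts it. Coefficients are hypothetical, for illustration only.
    return 0.008 / np.sqrt(width_nm / 100.0) * (1.0 - 0.05 * n_neighbors)

# Two identically sized, identically placed devices share mu and sigma...
mu, sigma = vt_mean(120.0, 3), vt_sigma(120.0, 3)
# ...but each instance's Vt is still a probabilistic draw.
vt_a, vt_b = rng.normal(mu, sigma, size=2)
print(f"mu = {mu:.4f} V, sigma = {sigma:.4f} V, draws: {vt_a:.4f}, {vt_b:.4f}")
```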
