Q&A: A Look at 20nm Design Challenges and Solutions

Comments (2) | Filed under: Industry Insights, lithography, Encounter, EDI, Double Patterning, Deokar, variability, 20nm, routing, design rules, clock concurrent optimization, Azuro, place and route, placement, Flex Models

The 20nm process node promises tremendous advantages in power, performance and design capacity, but also raises tough design challenges. These challenges include increased timing and power variability, complex layout rules, and incredibly large designs with massive amounts of IP. A major new challenge at 20nm is the requirement for extra masks (double patterning) to make existing lithography work at this advanced process node.

In this interview Rahul Deokar, product marketing director at Cadence, offers a Cadence perspective on the most pressing design challenges at 20nm and the tool capabilities and flows that will be needed to enable success at this node. Further perspectives on 20nm are available in a newly-published Cadence whitepaper, summarized here.

Q: What are the key advantages of moving to 20nm, and where are you seeing the most interest?

A: There are three primary reasons why we are seeing more system and semiconductor companies consider 20nm. One is the performance you can get, the second is the number of transistors or amount of IP you can put on the chip, and the third is lower power.

Within our customer base, we are seeing a lot of interest in the wireless space, which includes smartphones, tablets, and consumer devices. In this market you have to support different standards, the device has to be really fast, it has to have Internet access, and all this has to be done at lower power so you don't drain the battery. We're also seeing interest in 20nm in other segments like computing and graphics processors.

Q: Overall, what do you see as the primary design challenges at 20nm?

A: There are three kinds of challenges. One challenge is maximizing yield and manufacturability, and that involves really complex layout rules. As you go to 20nm there is an explosion in the different rules you have to deal with -- there are about 400 advanced layout rules for the metal layers. Additionally, double patterning comes into the picture.

The second set of challenges has to do with timing and power variability. Here the design might work, but not at the level of performance or power you intend. There is a lot of variability at 20nm. Metal pitches have gone from 100nm to 80nm and 64nm, and there is increased coupling between the wires. There are also more parasitics in device modeling because of the increased interconnect. You have more layout-dependent effects, where the proximity of cells near each other leads to variations in both timing and power.

The third big challenge is the very reason customers are moving to 20nm - to do really large designs. EDA tools must handle the design size and complexity that comes along with 20nm. That requires an ability to handle exponentially increasing IP and an entire SoC. Designers also need to do power management on entire SoCs and do verification signoff in a reasonable period of time.

Q: Who needs double patterning at 20nm, and on how many layers?

A: It looks like everybody moving to 20nm will need to use it, because conventional lithography is no longer sufficient. Layout features are completely disappearing because of lithography distortion, and the problem cannot be corrected because of the optical resolution limit. Double patterning gives the existing [lithography] technology a new lease on life. The good news, though, is that it does not need to be done for every metal layer. Most foundries and IDMs are experimenting with double patterning for the lower metal layers. For the higher metal layers, five and above, you might not need it.

Q: What design challenges are posed by double patterning?

A: There are a number of implications. One step that double patterning impacts is cell and library generation. You need to make sure silicon IP is compliant with double patterning layout rules. It is also very critical to account for double patterning during placement. We have a unique technology that does automatic colorized placement, and the end benefit is a less congested design. With less congestion it is much easier to meet timing and power requirements.

But the biggest impact is in routing. Double patterning has to be integrated inside the routing solution - it cannot be an afterthought where you finish the routing and then run decomposition. It has to be done correct-by-construction, and that's our approach to it. We carry double patterning intent forward from cell and IP generation to double-pattern-aware routing, and finally to signoff physical verification. This provides faster convergence because intent is carried forward throughout the flow. A second benefit is better quality of results.
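The decomposition step mentioned above is commonly modeled as two-coloring: features spaced closer than the single-exposure pitch limit must land on different masks, so a legal mask assignment exists exactly when the resulting conflict graph is bipartite. The sketch below is an illustrative, simplified model of that check (the feature list, conflict pairs, and function name are hypothetical, not Cadence's implementation):

```python
from collections import deque

def decompose_two_masks(features, conflicts):
    """Assign each layout feature to one of two masks (colors 0/1).

    `features` is a list of feature ids; `conflicts` is a list of
    (a, b) pairs whose spacing is below the single-mask pitch limit,
    so a and b must go on different masks. Decomposition succeeds
    iff the conflict graph is bipartite; an odd cycle of conflicts
    is an unresolvable coloring violation.
    """
    adj = {f: [] for f in features}
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)

    color = {}
    for start in features:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:  # BFS, alternating colors level by level
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None  # odd conflict cycle: not decomposable
    return color

# Three mutually conflicting features form an odd cycle -> no legal split.
print(decompose_two_masks(["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")]))
```

An odd cycle of tightly spaced features is exactly the kind of violation a correct-by-construction router avoids creating in the first place, rather than discovering in a post-routing decomposition pass.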

Q: You mentioned complexity. What kinds of transistor counts do you expect, and how can EDA tools help?

A: 20nm is expected to provide 8 to 12 billion transistors, so that's a huge increase in the size of designs, and it's done with a 2X density shrink and 50% better performance. What's needed to handle such large designs is a unique abstraction technique. We've been working on something called Flex Models, which allows us to abstract out large design macros or blocks. We've seen an automatic reduction in the size of the netlist that needs to be handled, and as a result the design team can converge on a design much faster.

Q: Variability is already a problem at 40nm and 28nm. What gets worse at 20nm?

A: One aspect that gets worse involves layout-dependent effects. At 20nm cells are much closer to each other and the proximity effect of different kinds of cells and interconnects has a worse effect on both timing and power. Layout-dependent effects due to lithography and stress need to be characterized up front, and what's needed is context-driven placement and optimization.

The Encounter Digital Implementation system has the ability, during place and route, to determine how different cells are going to interact and how one layout configuration affects timing and power compared to another. It can choose the right neighbors to get better performance and power.

Q: Does the clock concurrent optimization technology (CCOpt) recently acquired from Azuro have a role to play at 20nm?

A: It certainly has a very big role. We have already seen that the clock network is getting really complex at 40nm and 28nm. At 20nm many more clocks are introduced. People are gating clocks, there are power shutoffs, and there are many modes and corners. A traditional clock design methodology will just not cut it - you need a new architecture that has been designed from scratch.

In the traditional clock design methodology, clocks are treated as an afterthought. At 20nm you need clock design that is concurrent with the rest of the logic and physical design. You need to manage useful skew. These are things we do with Azuro [technology] and we get a much better end result in performance, power and area.
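Useful skew works by deliberately shifting clock arrival times at individual registers so that slack is borrowed from fast paths and given to slow ones. A minimal sketch of the underlying setup-slack arithmetic (the numbers and function name are hypothetical illustrations, not the Azuro algorithm):

```python
def setup_slack(t_period, t_launch, t_capture, t_clk2q, t_comb, t_setup):
    """Setup slack for a register-to-register path, in ns.

    t_launch and t_capture are the clock arrival times at the launching
    and capturing flops; their difference is the skew. A deliberately
    late capture clock (positive useful skew) relaxes a tight path.
    """
    data_arrival = t_launch + t_clk2q + t_comb
    data_required = t_period + t_capture - t_setup
    return data_required - data_arrival

# Zero skew: a 1.0 ns logic path with 0.1 ns clk->q and 0.05 ns setup
# fails in a 1.0 ns clock period (slack ~ -0.15 ns).
print(setup_slack(1.0, 0.0, 0.0, 0.1, 1.0, 0.05))
# 0.2 ns of useful skew on the capture clock makes the same path pass
# (slack ~ +0.05 ns).
print(setup_slack(1.0, 0.0, 0.2, 0.1, 1.0, 0.05))
```

Delaying a capture clock helps the path feeding it but tightens the next stage downstream, which is why skew has to be optimized concurrently across the whole clock network and logic rather than fixed one path at a time.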

Q: What's needed in a 20nm design tool flow? Will a point tool approach work?

A: Point tools will not work. At Cadence we have two goals - one is to mitigate 20nm design risk, and another is to help customers accelerate 20nm designs. Either of these goals requires an end-to-end flow. Things like double patterning, clock design, and layout-dependent effects all have to be considered up front in the design flow, from IP characterization to placement and routing and final signoff.

Q: Is the Encounter Digital Implementation system ready for 20nm today?

A: It is definitely ready. We have been collaborating closely with our 20nm ecosystem partners for a long time, and we have engaged with them very early in the cycle - in fact we have helped them define 20nm technical specifications and interfaces. Right now we are doing multiple test chip tapeouts with our partners to make sure that our modeling, abstraction and flow will produce the best results. There's still some more 20nm work involved in moving to production, and there will be additional fine-tuning of our tools and methodologies, and we are going through that exercise right now.

Richard Goering

Related blog posts

GTC Presentation: Cadence Outlines Comprehensive 20nm Design Flow

Whitepaper Summary: How to Succeed at 20nm

Video: Easing the Challenges of Double Patterning at 20nm

Q&A: Samsung's Ana Hunter Offers Advance Look at 20nm

DAC Panel: 20nm is Tough, But Not a Roadblock

Double Patterning - A Double Edged Sword?



By mukesh on June 21, 2012
What are the compute requirements for a 20nm SoC vs a 28nm SoC? I.e., based on design/process complexity, is there a good industry benchmark (2x, 3x, ...) to estimate compute resource needs?

By rgoering on June 21, 2012
I'm not aware of any such benchmark -- but you raise a good question.
