DAC 2013 Panel: What’s Needed to “Fix” Timing Signoff?

Filed under: Industry Insights, ARM, SSTA, FinFETs, parallelism, Power, Double Patterning, GlobalFoundries, timing, OCV, 20nm, STA, Cadence, Altera, optimization, variation, MMMC, timing signoff, static timing analysis, Tempus, timing closure, parallel processing, Devgan, Brian Fuller, TSignoff, power closure, statistical timing analysis, AOCV, signoff panel

Has timing signoff innovation become an oxymoron? What happened, and how do we fix it? That was the provocative title of a Cadence-sponsored lunch panel held June 3 at the Design Automation Conference (DAC 2013). Panelists from ARM, Altera, GLOBALFOUNDRIES, and Cadence talked about the challenges of timing signoff and discussed what's needed to make it work better.

Timing signoff is a hot topic right now. Two weeks before DAC, Cadence introduced the Tempus Timing Signoff Solution, which includes a massively parallel timing engine that can scale to hundreds of CPUs, optimizes as well as analyzes, and provides accurate path-based analysis that is fast enough to be practical.

The panel was moderated by EE Times veteran Brian Fuller, who recently joined Cadence as editor in chief and now writes his own blog there. The panelists were:

  • Dipesh Patel, Executive Vice President for Physical IP, ARM
  • Tom Spyrou, Design Technology Architect, Altera
  • Richard Trihy, Director of Design Methodology, GLOBALFOUNDRIES
  • Anirudh Devgan, Corporate Vice President and General Manager, Silicon Signoff and Verification, Cadence

"Timing closure is one of the largest poles in the design flow today, with the increase in multi-mode multi-corner timing analysis views, the lack of closure signoff tools, and increasing variation," Fuller said. "We find ourselves in the middle of a paradigm shift. We need a new injection of technology."

Here are some excerpts from the conversation.

What are the main challenges with timing signoff?

Patel - Timing closure is taking up a huge portion of the design cycle. Some metrics say it takes 60% of the time. Reducing time to closure is the big challenge we're trying to solve.

Spyrou - With static timing analysis, people want three things - capacity, run time, and accuracy. Hierarchy is one way to approach these goals, but its context-dependent models are only good for one design. As designs get bigger, memory capacity will become more of a problem.

Trihy - From a foundry perspective, the main issue is variation. You need to be able to control the margins. I'm not seeing very effective solutions for that.

Devgan - There are three main issues in signoff. They are speed and capacity, accuracy, and closure in terms of fixing things. One thing the EDA industry has to do better is to use parallel hardware.

Is timing signoff really an oxymoron?

Spyrou - No, there has definitely been innovation in timing. But there are still big problems getting designs out.

Trihy - Solutions have been proposed for different kinds of OCV [on-chip variation] and timing analysis. I don't think it's easy for the customer to design with these technologies.

Devgan - Innovation happens either in the commercial space or in academia. In the commercial space there has only been one solution for the last 20 years. In academia, a lot of papers were written, but I think they got too focused on features like statistical timing analysis that were not mainstream.

Why statistical static timing analysis (SSTA) didn't catch on

Patel - One reason statistical analysis failed is methodology. Trying to create SSTA models from libraries, and use them in the flow, was incredibly complicated. Since then we've tried different things, including AOCV (advanced OCV), which has some context dependency. We need to come up with something context-independent.

Trihy - It's true, generating libraries for statistical timing was painful. There was a huge investment on the part of the customer. But variation is here and is getting worse. We'll see how AOCV can help deal with variation problems.

How parallelism can speed timing signoff

Devgan - The hardware industry is going to massively parallel platforms. If you look at EDA, many tools are multi-threaded, but that only gets you a limited speedup - the maximum you can squeeze out is about 8 CPUs. If you redesign the algorithm for top-down parallelism, rather than retrofitting bottom-up multi-threading, you can scale much larger.
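Devgan's contrast between the two approaches is easy to miss. Bottom-up multi-threading parallelizes the inner loops of a single shared computation, so shared state and synchronization cap the speedup; top-down parallelism first carves the problem into self-contained units of work that can be farmed out to separate processes or machines. Here is a minimal sketch of the idea in Python. Everything in it (the toy netlist, the cone-per-endpoint partitioning, the function names) is invented for illustration and says nothing about how Tempus is actually built:

    from multiprocessing import Pool

    # Toy netlist: gate -> (gate delay, list of fan-in gates). Invented data.
    NETLIST = {
        "in1": (0.0, []), "in2": (0.0, []),
        "g1": (1.2, ["in1", "in2"]),
        "g2": (0.8, ["in1"]),
        "out1": (0.5, ["g1"]),
        "out2": (0.7, ["g1", "g2"]),
    }

    def arrival_time(gate, memo):
        """Latest arrival at a gate: its delay plus the worst fan-in arrival."""
        if gate not in memo:
            delay, fanin = NETLIST[gate]
            memo[gate] = delay + max(
                (arrival_time(g, memo) for g in fanin), default=0.0)
        return memo[gate]

    def analyze_endpoint(endpoint):
        """Self-contained unit of work: time one endpoint's entire fan-in cone."""
        return endpoint, arrival_time(endpoint, {})

    if __name__ == "__main__":
        endpoints = ["out1", "out2"]
        # Top-down: each endpoint cone is an independent task, so the same
        # code scales across processes (or machines) by redistributing tasks.
        with Pool(processes=2) as pool:
            for endpoint, at in pool.map(analyze_endpoint, endpoints):
                print(f"{endpoint}: worst arrival time = {at:.2f}")

The trade-off is visible even in this toy: gate g1 feeds both endpoints, so each partition recomputes it. That duplication is the price of making the units of work fully independent, and a production timer has to balance it against scalability.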

How has advanced node design changed the signoff challenge?

Trihy - Design flows didn't change that much down to 28nm, but at 20nm and beyond there are lots of new effects - double patterning, triple patterning, FinFETs - that really change the way you do design. Double patterning is a source of variation. With FinFETs, we are potentially seeing the Miller effect on steroids. The current drive is very high and the gate capacitance is larger.
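As a textbook refresher (this equation was not part of the panel discussion): in the standard Miller approximation for an inverting stage with voltage gain -A_v, the gate-to-drain capacitance C_gd is magnified at the input,

    C_{in} \approx C_{gs} + (1 + A_v)\,C_{gd}

so a device with stronger drive (higher effective gain) and larger gate capacitance, as Trihy describes for FinFETs, multiplies the effective coupling rather than merely adding to it.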

Patel - We see the same effects - double patterning, the variability of metals. With FinFETs we've seen run times go up exponentially. Not only do you sign off IP at more corners, but the time for each signoff run is also going up.

Spyrou - Because our [Altera FPGA] fabric is a regular structure, we don't have to depend on the accuracy of timing models. But at some point it would be good to have a static timing analysis tool that is "library-less" or that takes advantage of massive parallelism.

Devgan - I think accuracy will be a big problem. That's due to variability but it's also inherent when you drop the voltage to 0.5 or 0.6 volts, and things are no longer digital. The other thing is that the number of views is just absurd. OCV can solve some issues, but when there's variation with 100 corners, we have to handle things more intelligently than we've been doing.
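The view explosion Devgan mentions is simple multiplication: in MMMC analysis, every mode must be verified at every corner. As a purely hypothetical illustration (these counts are not from the panel),

    \text{views} = \text{modes} \times \text{corners}, \quad \text{e.g.}\ 5 \times 20 = 100

so each new corner adds a full set of mode analyses to the signoff workload rather than a single run.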

The importance of optimization in signoff

Devgan - Signoff is not just analysis. I think signoff has to be able to fix, and fix in the right context. Having the same timer in place and route can help, but place and route is not signoff; it has different accuracy. The right answer is that signoff should have all the views and all the accuracy, and it should be able to fix in the physical context.

What's the connection between timing signoff and power?

Patel - For me, timing and power are linked. You can't have one without the other. We have to make sure any solution we deliver takes power into account.

Devgan - Timing and power should be done together from an analysis and signoff perspective. Right now I see power as mostly design techniques. Tools haven't contributed as much, and power analysis comes too late in the design cycle. The power number is pessimistic - it's like a worst-case signoff number.

Spyrou - Delay calculation and noise analysis used to be separate, and now they're coming together with static timing analysis. Power consumption analysis and power grid analysis are still separate. In the future, I think power grid analysis will be part of timing signoff. Then eventually we'll bring in power consumption analysis, which is a harder problem.

If we hold the same panel two years from now, what will we be talking about?

Patel - I think we'll solve the MMMC [multi-mode, multi-corner] stuff, and the other thing I'd like to see is a new approach to OCV.

Spyrou - In the next couple of years enough people are working on memory scaling across CPUs that I think we'll see breakthroughs in that area.

Trihy - With AOCV or whatever method, I hope we have a solution. Other challenges, like FinFETs or double patterning, may break the paradigm.

Devgan - We just launched a tool [Tempus] and the next year will be very exciting for us. I think the industry has to solve the performance problem and the accuracy problem, and do the fixing in signoff.

Richard Goering

Related Blog Posts

Q&A: Anirudh Devgan Discusses New Cadence Signoff Strategy

Tempus - Parallelized Computation Provides a Breakthrough in Static Timing Analysis