Tempus – Parallelized Computation Provides a Breakthrough in Static Timing Analysis

Filed under: encounter timing system, SSTA, Multicore, parallelism, static timing, STA, multi-threading, multi-core, Cadence, signoff, ETS, Molina, distributed processing, timing analysis, Tempus, parallel computing, parallelized computation, timing closure, parallel processing

Cadence this week (May 20, 2013) announced the Tempus Timing Signoff Solution, a new static timing analysis and closure tool that offers significant speed and capacity advantages over existing solutions. Tempus promises to accelerate signoff timing closure by a matter of weeks. One factor behind this fast performance is a clever application of parallelized computing technology to static timing analysis.

Granted, parallelized computing across multiple CPUs is not a new technology, and it has been applied to EDA applications such as parasitic extraction. But timing analysis is a much tougher challenge for parallelization, noted Ruben Molina, product marketing director for signoff at Cadence. With parasitic extraction, for instance, you can slice the die into quadrants and extract from each quadrant. Timing, however, brings in multiple input files, constraints, and logic cones that span logical and physical hierarchies.

Many static timing tools, including the Encounter Timing System, use distributed processing for timing views. That's fairly clear-cut, Molina noted, because each timing view is independent. You're not subdividing a single problem; you're analyzing the same design under slightly different conditions or operating modes.
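Because each view is independent, view-level distribution needs no coordination between workers. The sketch below illustrates the idea with a toy stand-in for a per-view analysis; the derating factors, path names, and merge rule are hypothetical and not Cadence's implementation.

```python
# Toy sketch of view-level distributed timing: each "view" (corner x mode)
# is an independent job, so views can be farmed out with no coordination.
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-view delay derating factors (corner, mode) -> factor.
VIEWS = {
    ("slow", "functional"): 1.15,
    ("fast", "functional"): 0.85,
    ("slow", "test"):       1.15,
    ("fast", "test"):       0.85,
}
PATHS = {"cpu/reg_a->cpu/reg_b": 4.0, "io/reg_c->io/reg_d": 4.6}  # delays, ns
CLOCK_PERIOD = 5.0  # ns

def analyze_view(view, derate):
    """Stand-in for a full per-view STA run: slack = period - derated delay."""
    return view, {p: CLOCK_PERIOD - d * derate for p, d in PATHS.items()}

# Each view could run on a separate machine; threads stand in for workers here.
with ThreadPoolExecutor() as pool:
    results = dict(pool.map(lambda kv: analyze_view(*kv), VIEWS.items()))

# Merge step: worst (minimum) slack per path across all views.
worst = {p: min(r[p] for r in results.values()) for p in PATHS}
```

Note that the only cross-worker step is the final merge, which is what makes this form of distribution "fairly clear-cut" compared with subdividing a single analysis.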

Check the Boundaries

The harder problem, and the one Tempus has tackled, is to take a really large analysis problem and decompose it into smaller problems that can run in parallel. "The challenge is in creating the next level of computational problems," Molina said. "Tools today are bound by either the logical or physical hierarchy of the design. It is a natural partitioning of the design that allows multiple design teams to work independently at the block level. The challenge is what you do with timing at the boundaries of the blocks when you assemble the design."

The "secret sauce" in Tempus, he said, is the ability to use an arbitrary number of specified compute resources to perform parallelized computation with maximum efficiency. "Tempus figures out how to parallelize the timing analysis problem, and it doesn't necessarily have to align with either physical or logical boundaries. It's looking beyond these simplistic views of hierarchy." Defining the subproblems, managing the compute load, and integrating the final answers is all done automatically such that the "user doesn't even see it."
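One way to picture partitioning that ignores design hierarchy is to cut the timing graph by endpoint fan-in cones: each cone is a self-contained subproblem, even though cones from different blocks overlap. The netlist, delays, and cone-based scheme below are entirely made up for illustration; the source does not disclose how Tempus actually partitions.

```python
# Hypothetical sketch: partition a timing graph by endpoint fan-in cones,
# independent of logical/physical hierarchy.
# node -> (gate delay, fan-in nodes); a tiny made-up netlist.
EDGES = {
    "a": (0.0, []), "b": (0.0, []), "c": (0.0, []),
    "g1": (1.0, ["a", "b"]),
    "g2": (2.0, ["b", "c"]),
    "f1": (0.5, ["g1", "g2"]),   # endpoint 1
    "f2": (1.5, ["g2"]),         # endpoint 2
}

def cone(endpoint):
    """Transitive fan-in of an endpoint: everything its timing depends on."""
    seen, stack = set(), [endpoint]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(EDGES[n][1])
    return seen

def arrival(node, memo=None):
    """Latest arrival time at a node (recursive longest-path over fan-in)."""
    memo = {} if memo is None else memo
    if node not in memo:
        delay, fanin = EDGES[node]
        memo[node] = delay + max((arrival(f, memo) for f in fanin), default=0.0)
    return memo[node]

# Each cone is self-contained, so it can go to a separate compute resource.
# Note the cones overlap (g2 feeds both endpoints) -- this is why timing is
# harder to slice up than, say, quadrant-based parasitic extraction.
cones = {ep: cone(ep) for ep in ("f1", "f2")}
arrivals = {ep: arrival(ep) for ep in ("f1", "f2")}
```

The overlap between cones is precisely the boundary problem Molina describes: a partitioner must either duplicate shared logic or coordinate across partitions.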

What happens if you take the same design that was previously parallelized, and run it in one monolithic process - assuming there's a compute resource large enough? According to Molina, you will get exactly the same answer. That has not always been true of earlier attempts to solve timing analysis with distributed processing.
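The determinism claim is testable in principle: a partitioned run should reproduce the monolithic answer exactly, not just approximately. The toy graph and levelized traversal below are hypothetical, but they show what "exactly the same answer" means in practice.

```python
# Hypothetical check of the determinism claim: partitioned analysis must
# match a single monolithic run bit-for-bit.
DELAYS = {"n1": 1.25, "n2": 0.75, "n3": 2.0, "n4": 0.5}       # gate delays
FANIN = {"n1": [], "n2": [], "n3": ["n1", "n2"], "n4": ["n3"]}

def sta(nodes):
    """Arrival-time computation over the fan-in cones of the given nodes."""
    at = {}
    def visit(n):
        if n not in at:
            at[n] = DELAYS[n] + max((visit(f) for f in FANIN[n]), default=0.0)
        return at[n]
    for n in nodes:
        visit(n)
    return at

monolithic = sta(DELAYS)             # one big run over every node
partitioned = {}
for part in (["n3"], ["n4"]):        # two independently analyzable subproblems
    partitioned.update(sta(part))
# Since each subproblem pulls in its full fan-in cone, the merged result
# agrees exactly with the monolithic run.
```

With exact partitioning the two results are identical; earlier distributed approaches that approximated boundary timing could not guarantee this.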

Parallelized Computation and Multi-Threading

Some EDA tools (including Tempus) are multi-threaded, and can thus take advantage of a single machine with multiple CPUs. Multi-threaded timing analysis is actually an easier problem to solve than massively parallelized computation, Molina said. With multi-threading, everything is localized with respect to hardware and communication. With massively parallelized computation, data and results must have a higher degree of independence in order to scale with an increasing number of compute resources.
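The contrast can be made concrete: threads share one in-memory graph, while a distributed worker must receive a self-contained, serialized slice of the data. The worker functions and data below are hypothetical illustrations of the two models, not Tempus internals.

```python
# Sketch contrasting shared-memory threading with distributed workers.
import json
import threading

GRAPH = {"paths": {"p%d" % i: 1.0 * i for i in range(8)}}  # path delays, ns
CLOCK = 5.0

# Multi-threaded model: workers read the SAME shared structure directly.
shared_slack, lock = {}, threading.Lock()
def thread_worker(names):
    for n in names:
        s = CLOCK - GRAPH["paths"][n]   # direct access to shared memory
        with lock:
            shared_slack[n] = s

names = list(GRAPH["paths"])
threads = [threading.Thread(target=thread_worker, args=(names[i::2],))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Distributed model: each worker gets only a serialized, independent slice,
# as if it crossed a machine boundary. No shared state to lean on.
def remote_worker(payload):
    slice_ = json.loads(payload)
    return {n: CLOCK - d for n, d in slice_.items()}

dist_slack = {}
for i in range(2):
    part = {n: GRAPH["paths"][n] for n in names[i::2]}
    dist_slack.update(remote_worker(json.dumps(part)))
```

The distributed version scales past one machine precisely because each slice is independent, which is the "higher degree of independence" Molina refers to.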

Because Tempus is both multi-threaded and massively parallel, it can run on multiple multi-core compute resources. While there is no theoretical upper limit on the number of CPUs, Cadence has run Tempus across as many as 64 CPUs.  At 32 CPUs, Tempus has analyzed tens of millions of cells in a single hour. Parallel computation also lowers the memory overhead. The observed memory footprint of Tempus allows design sizes in the hundreds of millions of cells with today's high end compute server configurations.

It all adds up to faster timing closure. As noted in today's announcement, the Tempus Timing Signoff Solution is an integrated closure environment that can not only run timing analysis, but also fix problems in the layout. The analysis takes up the majority of the run time, so because it is both multi-threaded and massively parallel, the overall optimization environment is much faster.

Because signoff timing closure can take up to 40% of the overall design flow, anything that substantially speeds it up will make a big difference. That's why the Tempus Timing Signoff Solution is a breakthrough technology for static timing analysis.

TSMC has certified the Tempus Timing Signoff Solution at 20nm; see the Cadence press release for details.

Richard Goering

 

Comments(1)

By ch prashanth on May 23, 2013
It is really a great thought to do timing analysis using the multi-threading concept.
