Cadence has moved away from traditional methods and product offerings for silicon
test in favor of a new direction, one that answers the title question.
In 2008, Cadence recognized that while the Encounter Test product delivered outstanding
quality of results, its ease of use was lacking. Perhaps
most important was the recognition of a rapidly shifting
design-production paradigm driven mainly by silicon process
phenomena. With quality and optimization of production cost remaining
R&D's highest priority, Cadence began responding more
aggressively to market demands for greater productivity,
predictability, and profitability.
These factors drove the Encounter Test product family in a new
direction. The result? A front-end logic design strategy that
truly recognizes new and emerging SoC design paradigms, and most
importantly fuses logic and DFT design into a single synthesis
optimization environment offering correlation with downstream physical and test
design flows. That's right -- a true, single logic synthesis environment for
analyzing and optimizing test mode logic.
Just another essential operating mode
This new product direction reflects the realization that with
transition-based quality requirements, test mode logic is simply another
essential operating mode for the SoC -- and certainly no less
critical in terms of Quality of Silicon (QoS). Why is this
realization so important? Simple ... with increasing process parameter
variations, the need for advanced transition-based fault modeling has
driven at-speed test methodology and architectures to the forefront
of logic design. What does this mean?
It means DFT architecture can no longer be a post-logic or post-physical
design care-about. With at-speed test mode logic, including
BIST, silicon test introduces significant changes to the
logical and physical design flows. The truth is that
logic synthesis drives two downstream, interdependent, parallel flows that lead
to silicon. One can reasonably argue that these flows will
not remain distinct as greater integration continues to
drive traditional test methodologies into extinction.
As many are aware, transition-based testing expands
pattern volume and test cycle time. Therefore, to optimize
production cost, the DFT architecture must now enable efficient and effective
test data compression to mitigate the cost impact of achieving true
QoS and profitability. And, as you'd
expect, every solution segues into a new
challenge. For example, higher, more efficient compression
architectures can introduce physical design congestion. Therefore,
physically-aware compression becomes critical.
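To make the cost lever concrete, here is a back-of-the-envelope sketch in Python, with purely illustrative numbers -- not data from any Cadence flow. Tester time is dominated by pattern count times the longest scan chain, and compression fans a few tester channels out to many short internal chains at the cost of a modest pattern-count penalty.

```python
# Back-of-the-envelope scan test time model. All numbers are illustrative
# assumptions, not measurements from any specific design or tool.

def scan_test_cycles(patterns, flops, chains):
    """Tester cycles ~= patterns * longest-chain length (shift-dominated)."""
    chain_length = -(-flops // chains)  # ceiling division
    return patterns * chain_length

flops = 1_000_000
uncompressed = scan_test_cycles(patterns=10_000, flops=flops, chains=32)

# 100x fan-out: 32 tester channels drive 3,200 internal chains; assume a
# ~20% pattern-count penalty for unresolved care-bit conflicts.
compressed = scan_test_cycles(patterns=12_000, flops=flops, chains=3_200)

print(f"uncompressed: {uncompressed:.2e} tester cycles")
print(f"compressed:   {compressed:.2e} tester cycles")
print(f"effective reduction: {uncompressed / compressed:.0f}x")
```

The same fan-out that buys this reduction is exactly what creates the routing congestion noted above -- which is why the decompressor and compactor networks must be planned with physical awareness.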
Transition-based testing also requires expanding
the definition of QoS. Previously defined
by area, timing, and power, QoS must now include testability
as a fourth quality parameter. This expanded QoS is
necessarily accompanied by a predictable test methodology -- one that
includes expanded concurrent optimization and,
importantly, correlation with downstream physical and test design flows
for predictable test.
Linking to downstream flows
To avoid netlist iterations and impacts on the physical design flow, the
logic synthesis environment must provide links to the downstream test design
flow to achieve meaningful correlation and convergence across
both physical and test design flows. Key predictable test goals
include "testability," power, and compression. One can reasonably
argue that "testability" is not limited to fault coverage.
Simply put, "testability" includes all factors necessary
for achieving a high-quality design (QoS). For example,
it should certainly include early DFT logic infrastructure rule checking
and auto-fixing. This ensures that traditional downstream
bottlenecks during the test design (ATPG) phase are mitigated or
eliminated.
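As a concrete illustration, the sketch below shows the flavor of such an early rule check -- flagging flip-flops whose clock is not controllable from a test pin, or whose asynchronous reset is not held off in test mode: two classic scan DRC violations that otherwise surface late, during ATPG. The data model is a deliberately toy one, not any real netlist API.

```python
# Toy early DFT rule check. The Flop data model is hypothetical; a real
# checker would operate on the synthesized netlist.
from dataclasses import dataclass

@dataclass
class Flop:
    name: str
    clock_from_test_pin: bool           # clock controllable in test mode?
    async_reset_inactive_in_test: bool  # async set/reset held off in test mode?

def dft_rule_check(flops):
    """Return (instance, reason) pairs for scan DRC violations."""
    violations = []
    for f in flops:
        if not f.clock_from_test_pin:
            violations.append((f.name, "clock not controllable in test mode"))
        if not f.async_reset_inactive_in_test:
            violations.append((f.name, "async reset not disabled in test mode"))
    return violations

design = [
    Flop("u_core/r0", True, True),
    Flop("u_pll/r1", False, True),  # internally generated clock: needs a test mux
    Flop("u_io/r2", True, False),   # reset driven by logic: needs a test gate
]

for name, why in dft_rule_check(design):
    print(f"VIOLATION {name}: {why}")
```

Auto-fixing, in this picture, is the synthesis-time insertion of the missing test mux or gating logic, so the violation never reaches ATPG.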
Power consumption during test is a component of "testability" and
it must be analyzed and optimized during logic design synthesis. As
mentioned, the process phenomena and related modeling issues have
complicated the ways in which test modes and patterns are designed and
managed. If you consider why compressed vectors may result
in more false failures than full-scan vectors, or why full-scan with
unmanaged capture power may result in otherwise unexplainable false failures,
the likely cause is voltage drop due to excessive power consumption. This
can also lead to reliability issues, field failures, and sub-optimal yield.
If your current solution entails "overdesigning" the power
grid, then this can lead to sub-optimal production costs and
profit. A quality test design should include the ability to
manage power during test mode and more closely approximate functional
mode switching. Understressing the chip by simply constraining fill bits
is not the answer. Reducing the switching activity alone may not
properly stress the IC design and can add significant pattern volume,
which defeats the cost savings of inserting compression.
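A minimal sketch of the trade-off makes the point, assuming a simple adjacent-bit toggle count as the switching proxy; the fill routine here is hypothetical, not the Encounter Test algorithm. How the don't-care bits of a mostly unspecified ATPG vector are filled swings shift-mode switching by roughly an order of magnitude.

```python
import random

# Shift-power proxy: count adjacent-bit transitions in a scan vector under
# two fill strategies for its don't-care bits. Illustrative sketch only.

def fill(vector, strategy):
    """Replace don't-care ('X') bits: 'zero' fill or 'random' fill."""
    out = []
    for bit in vector:
        if bit == 'X':
            out.append('0' if strategy == 'zero' else random.choice('01'))
        else:
            out.append(bit)
    return out

def shift_transitions(bits):
    """Transitions between adjacent scan cells -- a crude switching proxy."""
    return sum(a != b for a, b in zip(bits, bits[1:]))

random.seed(0)
# Typical ATPG vectors are mostly don't-cares (low care-bit density, ~5% here).
vector = [random.choice('01') if random.random() < 0.05 else 'X'
          for _ in range(10_000)]

for strategy in ('zero', 'random'):
    toggles = shift_transitions(fill(vector, strategy))
    print(f"{strategy:>6}-fill: {toggles} transitions")
```

Zero-fill lands near the low extreme and random fill near the high one; as argued above, a production flow has to target a budget in between that approximates functional-mode switching.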
Finally, predictable test and testability include the early
determination of achievable compression ratios. Having the ability to do
"what-if" creation and analysis of compression scenarios before
beginning downstream physical and test design flows is key to productivity and
predictability. All of these new care-abouts should be resolved and
automated within a single environment that can process, analyze, and verify in
a productive, predictable, and profitable manner at the chip and block level.
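One widely cited rule of thumb grounds such what-if analysis: the achievable compression ratio is bounded by roughly the inverse of the care-bit density of the ATPG patterns. The estimator below is only that rule of thumb with an assumed encoding-efficiency derate -- not the Encounter Test algorithm -- but it shows the kind of early answer a designer needs before committing to an architecture.

```python
# What-if compression estimate from care-bit density. The 0.8 encoding
# efficiency is an assumed derate for decompressor overhead, not a tool value.

def estimate_max_compression(care_bit_density, encoding_efficiency=0.8):
    """Rough upper bound on compression for a given fraction of specified bits."""
    return encoding_efficiency / care_bit_density

for density in (0.02, 0.01, 0.005):
    ratio = estimate_max_compression(density)
    print(f"care-bit density {density:.1%}: <= ~{ratio:.0f}x compression")
```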
And now you know the detailed truth.
And the answer? "Yes, they need each other and yes, we are
committed to delivering the solution." Thank you for reading and,
if it's meaningful to you, responding.