Diagnosis of Compressed Test Patterns: Several Things to Consider

Filed under: Silicon Diagnostics, Silicon Signoff and Verification, strategy for design-for-yield, Diagnostics DFM, ATPG, Scan test

Today, it is essential to put in place a strong methodology for identifying sources of yield loss during manufacturing. One widely accepted method involves diagnosing a representative sample of device failures during manufacturing test. The failing results are aggregated and analyzed, and a Pareto is created showing the highest-frequency failures across one or more of: cell, instance, net, test pattern, metal layer, and layout topology (e.g., nets on M4 with more than five vias).

This methodology is called volume diagnostics, and basic information about it can be found at www.cadence.com/newsletters/new_pdf/article3_diag.pdf
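To make the aggregation step concrete, here is a minimal sketch in Python of turning per-die diagnosis callouts into a Pareto. The record fields and values are illustrative assumptions, not a Cadence report format.

    from collections import Counter

    def build_pareto(callouts, key):
        """Count diagnosis callouts by an attribute (e.g. 'layer') and
        return them sorted by descending frequency."""
        counts = Counter(c[key] for c in callouts if key in c)
        return counts.most_common()

    callouts = [
        {"cell": "NAND2X1", "net": "n123", "layer": "M4"},
        {"cell": "NAND2X1", "net": "n456", "layer": "M2"},
        {"cell": "INVX2",   "net": "n789", "layer": "M4"},
    ]

    for value, count in build_pareto(callouts, "layer"):
        print(value, count)    # M4 2, then M2 1

The same counting can be repeated per cell, net, or test pattern to build the other Pareto views mentioned above.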

There are many considerations when you deploy diagnostics, including:

  1. On-line or off-line diagnosis - do you diagnose failures in real time, or do you sample failures and diagnose them outside of production?
  2. Number of failures in the sample - how many lots and wafers do you use to collect a sample, and how many failing die are in the sample?
  3. Diagnose scan test vectors or compressed test vectors - in the former case, you diagnose directly from the failing scan registers; in the latter case, you diagnose failures from the output (signature) of the compactor logic
Regarding item three, the growing use of embedded compression, often adopted to lower both test data volume and test time, complicates the diagnosis of scan chain failures as well as failures in the logic clouds between scan registers. Compression involves on-chip hardware that ATPG tools use to deliver compressed test patterns. In simple terms, a decompressor fans out compressed scan data to all scan chains, and a compactor combines the outputs of all scan chains into a compressed signature. Depending on the compression architecture, these two elements can contain combinational and/or sequential logic. In the case of Cadence's tools, we support the XOR (combinational) and OPMISR (sequential) compression architectures.
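To illustrate the combinational case, here is a minimal sketch of XOR space compaction. The grouping of chains onto outputs is an assumption for illustration, not the structure of a specific Cadence compactor.

    from functools import reduce

    def xor_compact(chain_bits, chains_per_output):
        """Compact one shift cycle's scan-chain output bits into fewer
        observed bits by XORing fixed groups of chains together."""
        outputs = []
        for i in range(0, len(chain_bits), chains_per_output):
            group = chain_bits[i:i + chains_per_output]
            outputs.append(reduce(lambda a, b: a ^ b, group))
        return outputs

    # 8 scan chains compacted 4:1 into 2 observed bits per shift cycle
    print(xor_compact([1, 0, 0, 0, 1, 1, 0, 0], 4))   # [1, 0]

In a sequential (OPMISR-style) compactor, the per-cycle result would additionally be folded into a signature register rather than observed directly each cycle.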

With sequential logic in the on-chip compactor, the diagnostic tool must use temporal data. When a failure is observed in the compactor's signature, the tool needs to 'unroll' the failing bit(s): it analyzes the data backwards in time to find the offending input pattern. This can involve examining hundreds or thousands of clock cycles to recover the actual scan bits that detected the failure. The problem is further complicated when multiple failures exist in the scan chains and/or in the logic clouds between scan chains.
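For intuition on why unrolling is needed, here is a minimal MISR-style sketch; the register width and feedback taps are illustrative assumptions, not the OPMISR implementation. Because each cycle's signature depends on the previous signature, a single failing signature bit can reflect scan bits captured many cycles earlier, which is exactly what the diagnostic tool must trace back through.

    def misr_update(signature, scan_bits, width=16, taps=(0, 2, 3, 5)):
        """Advance an LFSR-based signature register by one shift cycle,
        folding in this cycle's scan-chain output bits."""
        feedback = 0
        for t in taps:
            feedback ^= (signature >> t) & 1
        signature = ((signature << 1) | feedback) & ((1 << width) - 1)
        return signature ^ scan_bits

    sig = 0
    for cycle_bits in (0x00A3, 0x0000, 0x0410):   # per-cycle scan outputs
        sig = misr_update(sig, cycle_bits)
    print(hex(sig))   # final signature depends on every earlier cycle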

With a purely combinational compression architecture, the time-unrolling task disappears as a diagnostic challenge, but added complexity remains, especially at very large compression ratios (i.e., a small number of inputs fanning out to many scan chains and then being compacted down to a small number of bits). In this case, failure aliasing is very likely.
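Here is a minimal, hypothetical example of that aliasing: when two chains in the same XOR group fail in the same shift cycle, the errors cancel and the compacted output matches the fault-free value.

    def xor_group(bits):
        """XOR one group of scan-chain output bits down to the single
        observed bit, as a wide compactor group would."""
        out = 0
        for b in bits:
            out ^= b
        return out

    good   = [0, 1, 0, 1]   # expected chain outputs for one shift cycle
    single = [1, 1, 0, 1]   # one chain flips -> error visible at the output
    double = [1, 0, 0, 1]   # two chains flip -> the errors cancel (alias)

    print(xor_group(good), xor_group(single), xor_group(double))   # 0 1 0

The double-error case is indistinguishable from the fault-free case at the compacted output, which is why diagnostic resolution degrades as the compression ratio grows.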

What can one do to assure the best diagnostics results?

The optimal solution for combinational compression is to run diagnostics in a single pass. However, when OPMISR is used, or when results are less than optimal, it is desirable to have some number of uncompressed scan vectors available to run on ATE. This vector set can be small as long as it provides effective test coverage, on the order of 80 percent or more. When performing on-line diagnosis, this augmented test set can be quickly applied to a failing device to improve diagnostic results. The cost of the reload can be minimized by factors such as ATE architecture and the use of multi-site testing: the small set of scan vectors can be loaded onto early failures while passing devices are still being tested. Likewise, if the device is an AMS part, scan tests can easily be loaded onto failing devices during the long analog test sequences.

In summary, it is important to plan your diagnostics methodology and be prepared with scan tests for all situations where sequential compression is used or diagnostic results are less than satisfactory.
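As a closing illustration, here is a minimal sketch of the single-pass-plus-fallback flow described above. Every function here is a hypothetical stand-in for the tool and ATE steps, not a Cadence or tester API; only the control flow is the point.

    def diagnose_from_signature(signature_fails):
        # Stand-in for pass 1: diagnosis directly from compacted failures.
        return {"suspects": ["netA"], "confidence": 1.0 - 0.1 * len(signature_fails)}

    def retest_with_scan_vectors(device_id):
        # Stand-in for reloading the small (~80% coverage) scan set on the ATE.
        return ["chain3 bit 121", "chain7 bit 40"]

    def diagnose_from_scan(scan_fails):
        # Stand-in for pass 2: diagnosis from uncompressed scan failures.
        return {"suspects": ["netA", "netB"], "confidence": 0.95}

    def diagnose_failing_device(device_id, signature_fails, threshold=0.8):
        result = diagnose_from_signature(signature_fails)      # single pass first
        if result["confidence"] < threshold:                   # fall back to the
            scan_fails = retest_with_scan_vectors(device_id)   # scan set only
            result = diagnose_from_scan(scan_fails)            # when needed
        return result

    print(diagnose_failing_device("die_042", signature_fails=[5, 9, 13]))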

 

Comments(1)

By Ken on November 17, 2008
Good points but the data size of scan patterns is huge.
