Q&A: IBM Modeling Team Describes Advanced SOI Qualification Flow In Cadence MMSIM Platform

Filed under: Custom IC Design, MMSIM, APS, Spectre, SpectreMDL, SPICE model verification, BSIMSOI, SOI, model qualification, IBM, Compact Model Council, characterization, CMC, Monte Carlo

Circuits implemented in sub-micron technologies require designers to meet ever-tighter specifications despite increasing statistical variation and complexity. High correlation between actual silicon and circuit verification using advanced SPICE models is therefore a must to ensure first-pass design success. This level of characterization requires a high degree of cooperation and integration between modeling engineers and circuit simulation providers.

In this interview, members of the IBM modeling team talk about the verification effort that Cadence and IBM initiated and completed last year, using advanced silicon-on-insulator (SOI) process nodes as a pilot. The goal of the two teams was to create a robust, exhaustive, and efficient model qualification flow using the Cadence SPICE simulators (Spectre, UltraSim, APS) specifically for IBM SOI process nodes. This SPICE verification flow was successfully implemented in MMSIM 10.1 and above.

The people involved in this work are:

  • Radhika Allamraju, IBM, started working with IBM's bulk models in 2003 and moved over to SOI a couple of years ago.
  • Carl Wermer, IBM, has been supporting design customers using IBM's SOI models. Previously he worked in process engineering and characterization.
  • Joe Watts is a senior technical staff member at IBM and currently provides guidance to compact model groups in SOI and bulk. He is also chair of the Compact Model Council (CMC).
  • Helene Thibieroz is a staff application engineer in Cadence customer support and has been supporting Cadence customers on analog/mixed-signal products for 10 years.
  • Jushan Xie is a Cadence engineering director responsible for a simulation infrastructure that includes device modeling, simulation front end (SFE), MDL, and reliability simulation.
  • Natarajan Krishnan is a Cadence staff product verification engineer, leading the product validation teams for the MMSIM and AMS simulation products.

Q: Carl, Joe, can you please give us some background about modeling qualification at IBM and the flows and procedures you are using?

Carl: FET device modeling qualification at IBM starts from a new release of a compact model, such as BSIMSOI 4.31 from U.C. Berkeley.  Senior device modelers at IBM are part of the CMC [Compact Model Council], which supports the industry review, testing, and adoption of new compact model code.  IBM and Cadence are both part of the CMC, and we started working together as part of this organization.

After a new or updated compact model is accepted by the CMC, it is implemented by IBM in our internal simulator, and the model is fit to proprietary device data or targets. It is also implemented in the Cadence Compact Model Interface and used by all MMSIM simulators, including APS [Advanced Parallel Simulator], Spectre, and UltraSim.

Engineers use both internal and vendor tools to extract the fitting parameters that support matching between the compact model and silicon.

After the model is built, it is tested over a range of conditions to confirm that the fit is correct and self-consistent.
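As a toy illustration of the extraction step described above, here is a minimal sketch, assuming only NumPy/SciPy and synthetic data: it fits two parameters of a simple square-law drain-current expression to "measured" points with least squares. Real extraction flows fit dozens of BSIMSOI parameters region by region with dedicated tools, so the model function, parameter names, and data here are all placeholders, not IBM's actual setup.

```python
import numpy as np
from scipy.optimize import curve_fit

def ids_sat(vgs, k, vth):
    """Toy square-law saturation current, a stand-in for a real compact model."""
    return k * np.clip(vgs - vth, 0.0, None) ** 2

# Synthetic "silicon" data standing in for measured targets
vgs_meas = np.linspace(0.0, 0.9, 19)
ids_meas = ids_sat(vgs_meas, k=2e-3, vth=0.32)
ids_meas += np.random.default_rng(0).normal(0, 1e-6, ids_meas.size)  # measurement noise

# Extract (k, vth) by least squares
popt, _ = curve_fit(ids_sat, vgs_meas, ids_meas, p0=[1e-3, 0.3])
print(f"extracted k = {popt[0]:.3e} A/V^2, Vth = {popt[1]:.3f} V")
```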

Joe: For C-based models we have learned there is a need to test the analytic derivatives from the model against finite-difference derivatives calculated from the primary outputs of current and charge.  For BSIMSOI, we put a great deal of effort into this and fixed a large number of errors.

In a Verilog-A model the compiler generates the derivatives automatically, so this is not necessary.  In general the CMC requires verification to make sure that a new model or new features in an old model can match hardware measurements, and that it runs and converges for some sample circuits and gives reasonable circuit results.  IBM does such tests for enhancements we request, and other companies do them for enhancements they request.  Normally the circuits are not shared with the CMC because of IP contained in the circuits.
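To make that derivative check concrete, here is a minimal sketch of the idea, assuming only that a Vgs sweep has been exported with both the drain current Ids and the model's analytic gm (the data handling and tolerance are placeholders, not IBM's actual flow): it recomputes gm by central finite differences and reports the worst relative mismatch.

```python
import numpy as np

def check_gm_consistency(vgs, ids, gm_analytic, rel_tol=1e-3):
    """Compare the model's analytic gm against a central finite-difference
    estimate of d(Ids)/d(Vgs) taken from the same sweep."""
    gm_fd = np.gradient(ids, vgs)                 # central differences along the sweep
    mask = np.abs(gm_analytic) > 1e-12            # skip points where gm is essentially zero
    rel_err = np.abs(gm_fd[mask] - gm_analytic[mask]) / np.abs(gm_analytic[mask])
    worst = rel_err.max()
    print(f"worst relative gm mismatch: {worst:.3e}")
    return worst <= rel_tol

# Synthetic data standing in for an exported simulator sweep
vgs = np.linspace(0.0, 0.9, 91)
ids = 1e-4 * np.log1p(np.exp((vgs - 0.3) / 0.05))  # placeholder I-V curve
gm_model = np.gradient(ids, vgs)                   # pretend this is the model's analytic gm
print("consistent" if check_gm_consistency(vgs, ids, gm_model) else "mismatch")
```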

Q: What simulation challenges are device/modeling engineers facing with BSIMSOI, especially when dealing with advanced process nodes?

Carl: We have internal and external customers who use Spectre and other commercial simulators. So, after a model is generated for a given technology, we still need to check that the model implementation and results are consistent across many vendor simulators.

To ensure consistency between simulators, we run an exhaustive suite of DC netlists. The goal is to monitor multiple outputs (such as Ids, Qg, Cgg, and gds) while varying process, temperature, and voltage biases.  We also check simple circuit delay measurements at PVT corners.
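As an illustration of what such a suite can look like, here is a minimal sketch that expands process/temperature/voltage combinations into individual Spectre DC decks from a template. The model file, section names, device name, and terminal order are assumptions made for the example, not IBM's actual setup.

```python
from itertools import product
from pathlib import Path

# Hypothetical corner lists; a real suite would come from the PDK documentation
PROCESS = ["ss", "tt", "ff"]        # model library sections (assumed names)
TEMP_C  = [-40, 25, 125]            # temperatures in degrees Celsius
VDD     = [0.8, 0.9, 1.0]           # supply voltages in volts

TEMPLATE = """\
// DC sweep of one SOI nFET at a single PVT point (sketch)
simulator lang=spectre
include "soi_models.scs" section={corner}
M1 (d g 0 0) nfet w=1u l=40n        // device name and terminal order are assumptions
vd (d 0) vsource dc={vdd}
vg (g 0) vsource dc=0
setTemp options temp={temp}
sweepVg dc dev=vg param=dc start=0 stop={vdd} step=0.01
"""

outdir = Path("pvt_decks")
outdir.mkdir(exist_ok=True)
for corner, temp, vdd in product(PROCESS, TEMP_C, VDD):
    deck = TEMPLATE.format(corner=corner, temp=temp, vdd=vdd)
    (outdir / f"dc_{corner}_{temp}C_{vdd}V.scs").write_text(deck)
print(f"wrote {len(PROCESS) * len(TEMP_C) * len(VDD)} decks to {outdir}/")
```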

In addition, we have implemented a large number of features within the compact models to support parameter variability (process statistics) and allow designers and developers to predict process sensitivity. Those features need frequent updates as increasingly complex solutions are required to model the effects of shrinking process dimensions.  The same effort is also applied in Verilog-A for the passive models.

We use several Cadence simulators. We first run Spectre to compare graphical outputs and check basic functionality. We then take advantage of SpectreMDL scripts for more complex simulations (such as Monte Carlo and corners) and for measurement capability. Spectre has proven to be extremely efficient in parsing all the model code we use and giving good run times, particularly for large Monte Carlo simulations.

Finally, we compare a large set of outputs across several commercial simulators (including Spectre), using Perl scripts and an internal IBM tool, to ensure our models are implemented consistently.
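The Perl scripts and internal tool are IBM's own; purely to illustrate the kind of check involved, here is a minimal Python sketch that compares two simulators' exported tables output by output against a relative tolerance. The CSV layout is a hypothetical export format, not the actual IBM flow.

```python
import csv
import sys

def load_table(path):
    """Read one simulator's exported results as CSV: one row per bias point,
    one column per output (e.g. Ids, Qg, Cgg, gds)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def compare(ref_path, dut_path, rel_tol=1e-4, abs_floor=1e-15):
    ref, dut = load_table(ref_path), load_table(dut_path)
    assert len(ref) == len(dut), "bias sweeps differ in length"
    worst_key, worst_err = "", 0.0
    for r, d in zip(ref, dut):
        for key in r:
            a, b = float(r[key]), float(d[key])
            err = abs(a - b) / max(abs(a), abs_floor)  # relative error with a small floor
            if err > worst_err:
                worst_key, worst_err = key, err
    print(f"worst relative error: {worst_err:.3e} on output '{worst_key}'")
    return worst_err <= rel_tol

if __name__ == "__main__":
    # Usage: python compare_outputs.py reference.csv candidate.csv
    sys.exit(0 if compare(sys.argv[1], sys.argv[2]) else 1)
```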

Q: Based on your experience, what are the simulation challenges that design engineers are facing with BSIMSOI?

Carl: I have seen the same pattern with a number of IBM customers making the transition from bulk to SOI.  There is confusion over what to do with floating body nodes, and concerns over slower run times and some extra convergence challenges.  The structures for body-contacted models are also an initial hurdle.

That being said, a large number of design teams have now cleared these early hurdles and are designing successfully in SOI. For convergence questions, we first look for problems with the compact models and the customer setup, and then rely on the Cadence customer support team to take over.

Joe: For C-based models we need to test the analytic derivatives from the model against finite-difference derivatives calculated from the primary outputs of current and charge.  BSIMSOI 4.0-4.2 had serious convergence issues because of the derivative errors mentioned previously.  BSIMSOI 4.3 had much better derivatives, and at least in our tests convergence improved.  Another hurdle for logic design teams moving to SOI is the history effect.  New teams need help understanding the effect and learning to design to accommodate it.

Q: What was the triggering factor that started this collaboration?

Carl: I have been working with Cadence on compact models on and off since 1998. Regarding SOI, significant investment did not start until 2000.  Customers who had been using Spectre with bulk models moved to SOI and wanted to continue to work with the Cadence Spectre simulator.

Along the way, we have dealt with a number of issues related to differing implementations of the UCB compact model updates across simulators. These would typically be caught by the regression tests we perform for each simulator.

One recent problem we addressed with the Cadence team started when a third party customer asked them for a bug fix of the Berkeley BSIMSOI compact model that we also used. Ideally, Cadence would have been able to just integrate a new release from Berkeley, with a new version identifier, including the fix, and the customer could have moved to that version. However, the update from Berkeley was many months off, and also included other code updates not everyone was ready to work with.

The follow-up meetings between Cadence and IBM tackled the obvious problem of how a vendor can securely and rapidly provide fixes in code that is version-controlled by another organization, while also allowing users either to apply the fix or to stay with the code they have already qualified.

After some review, Cadence proposed an elegant solution: they now support an additional parameter in the model cards that allows us to include or exclude individual bug fixes in the code from UCB. If no value is assigned, it simply defaults to the original code, so there are no unexpected results and backward compatibility is fully preserved.

This collaboration also revealed that the regression testing done by Cadence did not fully match IBM's set of regression tests. Cadence therefore initiated an effort to align its regression tests and verification procedures with IBM's, and worked closely with us to close the gap.

Q: Can you describe the different steps that IBM and Cadence took jointly to define a model qualification specifically for IBM SOI process nodes?

Carl:  In response to the problem where our testing flagged an unexpected change in the Berkeley BSIMSOI code, the Cadence team stepped up and scheduled meetings to work with us and make sure their regression testing was updated to reflect our needs. During those meetings, both parties identified an exhaustive set of regression tests that would be used by both Cadence and IBM.

We then identified an IBM SOI process node technology file that could be used by Cadence R&D to ensure all parameters extracted by IBM are tested. Finally, we identified how often those regression tests should be performed and how they would be communicated to IBM. One of the key issues was to make sure our large suite of model output measures for device current and charge, along with derivatives like the related capacitances and outputs like gm/gds, would be fully tested by Cadence. As a result, Cadence added more regression tests to fully match IBM's model validation.

Q: Can you describe the outcome and results? What was achieved through this model qualification process?

Carl:  We sometimes work on very tight schedules, where features in the latest compact model qualified by the CMC are required in the device models we give to our customers.   Since we need to distribute these models for multiple simulators, this puts pressure on the vendors we work with.  Knowing that Cadence regression tests now fully integrate our areas of concern is a big positive and significantly reduces overall SOI model validation cycle time.

Q: How is this qualification process going to improve IBM productivity and SOI process node accuracy?

Carl: It reduces the exposure we would otherwise have with customers who are waiting for the latest model release but cannot work with it because not all the simulators properly support it yet.

Q: How do you think this is going to have a positive impact for designers? Which Berkeley version would you recommend using?

Radhika: We are moving, or have already moved, to BSIMSOI 4.31 for the 22nm and 32nm SOI nodes.

Q: What was your overall experience working with Cadence team? Would you agree this team was awesome?

Carl: While not everyone can be awesome, we are fortunate that Cadence has its share of people who are awesome: very insightful, quick to help us when we need advice or guidance using Cadence tools, and also quick to recognize and respond when there are problems that require changes at their end.

Q: How do you see this collaboration in the future? What would be the next step?

Radhika: Going into the next process node, we will be working with BSIMSOI 4.4 and BSIM-IMG and/or BSIM-MG.  There are significant changes in the multigate models, and what we have learned working together on BSIMSOI 2 through 4.31 will benefit both teams. We are also looking at moving to APS, as IBM and the IBM design teams we work with have already benefited from this simulator.

Helene Thibieroz
