
Welcome to CDNLive! On-Demand. San Jose was the setting for CDNLive! Silicon Valley 2010, a jam-packed day featuring a morning of general sessions, a networking luncheon with Cadence R&D, an afternoon of technical sessions across 6 tracks, and an evening of Cadence and partner exhibits. At the event, Cadence also unveiled its new, holistic approach to Silicon Realization, a key component of the EDA360 vision.

To access the 6 tracks of technical sessions, please log in with your Cadence.com account (corporate email required). If you don’t have an account, create one here.

See browser support below for playback of multimedia sessions.
Browse Sessions

Conference Opening View video In this dynamic opening to CDNLive!, Shapeology performers depict the transformation occurring in EDA by taking the audience on a visual tour of where the electronics industry has been and where we are headed in the future of realizing EDA360.
Welcome Lip-Bu Tan, President and CEO, Cadence View video EDA's biggest technology challenges—HW/SW integration, IP optimization, and increasing software content—are driving the need for innovation, globalization, and ecosystem collaboration. Demand for killer apps, continuous connectivity, mobility, and green design holds exciting possibilities for EDA. Market forecasts anticipate 32% semiconductor growth next year, with numerous prospects in China and more frequent acquisitions.
Cadence Product Strategy John Bruggeman, Senior Vice President and Chief Marketing Officer, Cadence View video Revitalizing the EDA industry from a profitability standpoint requires shifting from a hardware-first approach to an application-driven business model. Get a high-level view of the Cadence product strategy for each layer of the EDA360 vision. Highlights include an open standards-based ecosystem that delivers accurate models, a virtualization platform for model selection, and tools that optimize and verify IP for integration.
Cadence Silicon Realization Overview Chi-Ping Hsu, Ph.D., Senior Vice President, Research and Development, Silicon Realization Group, Cadence View video Productivity and predictability issues are making it crucial for engineers to optimize functional, electrical, and physical specifications concurrently rather than in the typical EDA silos. This close look into Silicon Realization reveals three critical requirements: unified design and verification intent; higher levels of abstraction; and convergence of late-stage design/manufacturing data into the early phases of design.
Realizing End-to-End Mixed-Signal Design Dave Desharnais, Product Marketing Group Director, Silicon Realization, Cadence View video An in-depth technical discussion and demonstration of how the three key elements of Silicon Realization—intent, abstraction, and convergence—can be applied to mixed-signal challenges and deliver an end-to-end, predictable path to silicon success. Key concepts include analog behavioral modeling, design (power) intent for mixed-signal IP, analog/digital interoperability, and mixed-signal design closure.
Transaction-Based Acceleration: Strong Ammunition in Any Verification Arsenal Varun Gupta, Cadence View Session | Download PDF RTL simulation runtimes are severely impacted by the verification requirements of today's complex IC designs. Due to fierce market demands for increased functionality, serving multiple applications with the same core design, and shrinking market windows, the challenge of completing the verification plan on time has grown manifold. Time-critical tests, with requisite scoreboard monitoring, take days of simulation runtime, imposing protracted schedules on high-quality assurance milestones. Complementing verification with in-circuit emulation (ICE) and/or FPGA-based prototyping can provide much-needed relief in performance and the ability to verify with real-world directed stimulus and response, but exploration of the realm of deep corner-case bugs with random stimulus still remains. Both of these verification modes predominantly require full design synthesizability and the availability of software drivers to achieve higher quality modeling. Hardware acceleration of the existing testbench not only allows the discovery of deep bugs by enabling any simulator to be accelerated while maintaining metric-driven verification methods and use models, but also serves as a unique bridge between simulation and ICE. This early-phase acceleration, which enables gradual movement of behavioral design blocks into the synthesizable domain, allows a smoother transition into and faster bring-up of ICE, thereby improving overall productivity and reducing time to market.
Metric-Driven Verification Using TLMs Per Edstrom and Ajay Goyal, Cadence View Session | Download PDF Automated RTL verification relies on the existence of an independent "golden" reference model to ensure that algorithmic data transformation correctness is established. In the current verification flow, design verification happens at various abstraction levels: algorithm model, TLM protocol stage, SystemC signal stage, and RTL stage. Every stage has its own verification environment and there is very little reuse, which makes the verification effort huge and duplicative. This paper shows that by creating a new metric-driven verification environment that first fully verifies the untimed algorithm using transaction-level modeling (TLM) and then reuses the same verification environment for other abstraction levels, we were able to reduce the overall verification effort significantly. This approach provided a dramatic improvement in verification productivity and helped us reduce the time spent on RTL verification.
Bringing Power Analysis and Verification to the System-Level Teams Maulik Patel, Cadence View Session | Download PDF As designers use advanced low-power techniques to reduce power, they inherently increase the complexity of the SoC for power verification. System-level verification (functional and power) is required. However, as design size grows, EDA tool runtimes increase and the performance necessary to do the job is not available. Verification engineers have a narrow window to validate product requirements (for both functional verification and low-power verification). As a result, they typically focus on IP or short tests only. With aggressive time-to-market requirements, verification engineers do what they can, but it is frequently not enough. This paper describes an environment and an approach that can enable system-level verification by offering a high-performance verification computing platform, together with a methodology, to create system scenarios and run them on a prototype of the SoC to analyze, test, and optimize the design's low-power techniques.
TLM2 Modeling for OVM-Based Functional Verification Kevin Locker, System Silicon LLC View Session | Download PDF With the standardization of TLM2.0, the industry has taken a leap forward toward the ESL vision of a truly top-down chip design methodology. But visions often take unexpected turns on the path to full realization, and the use of SystemC TLM2 modeling as a key component of a functional verification methodology may be one such serendipitous by-product of the ESL vision. An OVM testbench incorporating a TLM2.0 reference model has several demonstrated benefits: 1. TLM2.0 models are ideally suited as virtual prototypes for early software development; 2. TLM2.0 models are often available before a working RTL simulation model and enable earlier spec validation and testbench development; 3. As specs evolve, adding new features and modifying existing features can be as simple as updating the reference model, without changes to other stimulus generators or checkers; 4. TLM2.0 models are a useful repository for checking and debug or reporting logic; 5. TLM2.0 models improve functional coverage planning and collection by allowing direct access to "virtual" DUT states without direct probing of the DUV. A chip-level SystemC TLM2.0 model was used as part of a metric-driven methodology to functionally verify the design of NXP's PCU9668 I2C controller. This paper describes what was learned and some of the challenges inherent in this approach. As a stepping-stone to a top-down ESL methodology, this approach has many parallels that will be discussed.
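To make the reference-model idea concrete, here is a minimal sketch (not from the session) of a SystemC TLM-2.0 target that could serve as the golden model behind a scoreboard; the register-file behavior and module name are hypothetical, and in a real testbench the socket would be bound to an initiator or transactor.

```cpp
// Minimal SystemC TLM-2.0 reference-model sketch (hypothetical register file).
// Assumes a SystemC 2.3+ installation; compile as part of a testbench with -lsystemc.
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>
#include <map>

struct RefModel : sc_core::sc_module {
  tlm_utils::simple_target_socket<RefModel> socket;  // transaction entry point
  std::map<sc_dt::uint64, unsigned char> regs;       // sparse register map

  SC_CTOR(RefModel) : socket("socket") {
    socket.register_b_transport(this, &RefModel::b_transport);
  }

  // Untimed functional behavior: reads and writes against the register map.
  void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& /*delay*/) {
    unsigned char* ptr = trans.get_data_ptr();
    sc_dt::uint64 addr = trans.get_address();
    for (unsigned i = 0; i < trans.get_data_length(); ++i) {
      if (trans.is_write()) regs[addr + i] = ptr[i];
      else                  ptr[i] = regs.count(addr + i) ? regs[addr + i] : 0;
    }
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
  }
};
```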
Co-Development of Hardware and Software with IBM Rational Solutions Martin Bakal, IBM Rational Software View Session | Download PDF Rising product complexity and intense market pressures present major challenges to designers of electronic systems. To stay competitive, tight coordination between the processes used by software and hardware engineers is critical to optimizing product quality while controlling costs and meeting aggressive timelines. This IBM Rational presentation discusses key elements of rapid co-development and explores how the use of shared processes between hardware and software development enables these teams to co-develop smarter products more efficiently. Once shared processes are established, common tools can drive the next level of efficiency. Integrations between key solutions from IBM Rational and Cadence form the bridge between hardware and software development. IBM Rational supports co-development of complex electronics in several ways: management of hardware and software requirements and their inter-relationships; upfront system modeling to determine which aspects of the design will be implemented in hardware vs. software; and common configuration management and defect tracking for both hardware and software development. IBM has successfully used these techniques internally across a 25,000+ user base to significantly reduce development costs, increase reuse, and improve quality. Integrating these solutions with Cadence solutions for hardware design provides an optimized environment for co-development of hardware and software.
Embedded ARM Software/Hardware Co-Design and Verification Enablement Barry Spotts, ARM View Session | Download PDF Software content in embedded designs is growing fast to meet consumer demand for capability, integration, and mobility. Embedded devices are being released to consumers at a faster rate and semiconductor design times are shrinking, so software is becoming a significant aspect of system design and verification. Creating application software rapidly, debugging efficiently, optimizing performance and power, and completing hardware/software co-verification is more complex than ever. In this presentation, you will learn about the latest solutions from ARM and Cadence for ARM-based embedded and application software creation, debug, and functional co-verification: ARM Fast Models enable early software development and co-verification with Incisive SystemC simulation; ARM VSTREAM running with the Palladium XP Verification Computing Environment enables co-verification when the task requires 100% accuracy with ARM cores and RTL SoCs; and both solutions share the widely used RealView Development Suite (RVDS) for software development.
TLM-Driven Architectural Exploration for LTE Multi-Core/Multi-OS SoCs Laurent Isenegger, CoFluent Design View Session | Download PDF Multi-core architectures are much more complex than single-core systems and have brought new design challenges. Keeping the same established techniques for future designs would only result in ever-increasing risks, costs, and time to market. That's why architectural exploration must be performed as early as possible in the design cycle. The proposed solution consists of creating executable specifications based on TLM SystemC models of both software and hardware, automatically generated from graphical capture. This approach provides key benefits for early design-space exploration and low-power optimization. It can be fully integrated into high-level synthesis flows based on Cadence C-to-Silicon Compiler. Integration with UVM verification flows using Incisive tools is also available. Initially, the application and its testbench are modeled and simulated to functionally validate the system's behavior. Then, this workload is mapped onto a platform including models of processors, buses, an OS, and hypervisors, and the simulation enables quick what-if analysis of the different alternatives without relying on the availability of hardware IP models. An LTE application is provided as an example. Instead of taking months to develop handwritten SystemC models or set up lower-level virtual prototyping environments, it took only a few weeks to create a model of the complete system and hence perform the required architectural exploration and optimization.
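As a flavor of the what-if analysis that such executable specifications enable, here is a toy mapping estimate (not from the session; the task cycle counts and core clocks are invented) comparing two task-to-core mappings analytically:

```cpp
// Toy design-space exploration: compare two task-to-core mappings by the
// makespan of per-core workloads. All numbers are invented for illustration.
#include <algorithm>
#include <array>
#include <cstdio>

int main() {
  // Cycle cost of four pipeline tasks (e.g., stages of an LTE datapath).
  const std::array<double, 4> cycles = {8e6, 3e6, 5e6, 4e6};
  const double fast_hz = 1.2e9, slow_hz = 600e6;

  // Mapping A: tasks 0,1 on the fast core; tasks 2,3 on the slow core.
  double a_fast = (cycles[0] + cycles[1]) / fast_hz;
  double a_slow = (cycles[2] + cycles[3]) / slow_hz;
  // Mapping B: heavy tasks 0,2 on the fast core; tasks 1,3 on the slow core.
  double b_fast = (cycles[0] + cycles[2]) / fast_hz;
  double b_slow = (cycles[1] + cycles[3]) / slow_hz;

  std::printf("Mapping A frame time: %.2f ms\n", 1e3 * std::max(a_fast, a_slow));
  std::printf("Mapping B frame time: %.2f ms\n", 1e3 * std::max(b_fast, b_slow));
  return 0;  // here mapping B wins (11.7 ms vs. 15.0 ms), before any IP exists
}
```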
Virtual Platforms for Embedded Software Verification Larry Lapides, Imperas View Session | Download PDF As the electronic systems industry shifts from design creation to integration, new tools and flows are needed, in particular for embedded software development. One of those new tools is software simulation, or virtual platforms. Virtual platforms are a representation of the hardware platform that allows the complete software stack to be executed (simulated). The benefits of these virtual platforms are that the instruction-accurate models are relatively easy to develop and are available to the software development team early in the system or SoC development cycle. Virtual platforms also provide access for the complete development team, even in multiple locations. In this presentation, in addition to discussing embedded software development on virtual platforms, we will show how the integration of Incisive SystemC simulation, Incisive Software Extensions, processor models from OVP, and software simulation and verification tools from Imperas enables software functional verification. When the virtual platform is coupled with Incisive Software Extensions and Imperas software verification tools, software engineers can verify the functionality of code, such as drivers, in the context of the complete OS running on the platform. This software verification capability has not been possible until now. New technologies (fast simulation and verification from Imperas) and new flows (integration between Cadence and Imperas tools) have made this possible.
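For readers unfamiliar with what "instruction-accurate" means in practice, here is a minimal sketch (entirely illustrative; the three-opcode ISA is invented) of the fetch-decode-execute loop at the heart of such a model — it advances architectural state one instruction at a time, with no pipeline or cycle timing:

```cpp
// Minimal instruction-accurate CPU model: correct architectural state per
// instruction, no cycle timing. The 3-opcode ISA is invented for illustration.
#include <array>
#include <cstdint>
#include <cstdio>
#include <vector>

enum Op : uint8_t { LOADI, ADD, HALT };
struct Insn { Op op; uint8_t rd, ra, rb; int32_t imm; };

int main() {
  std::array<int32_t, 8> reg{};            // architectural register file
  const std::vector<Insn> prog = {         // tiny program: r2 = 2 + 40
      {LOADI, 0, 0, 0, 2},
      {LOADI, 1, 0, 0, 40},
      {ADD,   2, 0, 1, 0},
      {HALT,  0, 0, 0, 0}};
  size_t pc = 0;
  for (bool run = true; run;) {            // fetch-decode-execute loop
    const Insn& i = prog[pc++];
    switch (i.op) {                        // one instruction per iteration
      case LOADI: reg[i.rd] = i.imm; break;
      case ADD:   reg[i.rd] = reg[i.ra] + reg[i.rb]; break;
      case HALT:  run = false; break;
    }
  }
  std::printf("r2 = %d\n", reg[2]);        // prints r2 = 42
  return 0;
}
```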
Innovative Power Management Techniques for Advanced SoC Design Frank Ferro, Sonics, Inc. View Session | Download PDF Consumer demand for mobile broadband products with ever-increasing functionality (video, graphics, VoIP, 3G, 4G) is driving up the complexity of the SoCs that power them. Even with all this functionality, consumers are not willing to compromise on battery life. Therefore, better power management at the SoC level is critical for consumer devices to succeed in the marketplace. Sonics will discuss how power-aware SoC design must start at the architectural description and move down through the SoC to the gate level to produce optimal designs. Included will be a description of various power management techniques, along with the implementation tradeoffs of these approaches, such as clock gating, rapid power switching, atomic shutdown, and power handshakes.
Resolving the IP-SoC Tower of Babel Kurt Wolf, Silicon-IP, Inc.; Vikram Phatak, Silicon Enablers; Kenneth Wang, Silicon-IP, Inc. View Session | Download PDF Separate IP design views are delivered to separate SoC integration teams, and inconsistencies between these views and subsequent quality issues are not discovered until handoff at each stage along the implementation chain. Delays from weeks to months are caused by determining whether problems arise from the SoC design or the IP function/design view, plus the time required to obtain the fix. This session provides a best-practice, repeatable methodology to create fully validated IP that meets specifications and is consistent across all IP views and design-flow interdependencies. Specific areas covered include: independent and combinatorial IP view compliance, validation of all design deliverables against claims made on datasheets and related operating margins, and verified design-flow integration. The best-practice methodology includes procedures from industry experts in the fields of IP due diligence and license negotiation, silicon validation programs, IP/SoC integration, and IP/datasheet validation. A composite case study (to protect confidentiality) is used to demonstrate each step of the methodology. By implementing these best practices, SoC design teams save 2-3 months of IP evaluation, verification, and integration time, as well as money.
Addressing the Increasing Need for Integrated Layout and 3D Full Wave Analysis Gerardo Romo, CST View Session | Download PDF As multi-band devices shrink and frequencies increase, the layout of packages, SiPs, and PCBs is becoming increasingly critical to maintaining good signal and power integrity and meeting radiated emission requirements. Rule checkers can help to some extent, but electromagnetic field simulators are required to see the full picture, both in pre-layout analysis and post-layout verification. Most layout tools have 2D simulators that can offer good insights, but for high accuracy, and to address layouts with non-planar elements such as wirebonds, a full 3D simulator is mandatory. CST has offered a plug-in to Cadence tools for some years, and more recently a direct import has become available that offers component recognition, layer editing, and net and area selection. A major collaborative project this year enables Cadence layout engineers to stay within their familiar layout environments and perform full-wave 3D extraction and simulation in the background.
OVM-Based Verification of Analog IP and Mixed-Signal SoCs Hao Fang, LSI; Neyaz Khan, Cadence View Session | Download PDF Virtually all modern SoCs are mixed-signal in nature. The verification of mixed-signal designs is a daunting and time-consuming task, falling roughly into two categories: verification of analog IP and SoC-level mixed-signal verification. Analog IP is typically both designed and verified by skilled analog designers, predominantly using the analog-schematic environment. The task of functional verification is based mostly on visual inspection, with very little automation for checking functionality and results. Most systems have to interface their millions of digital logic gates, DSPs, memories, and processors to the real world through analog components like a display, an antenna, a sensor, a cable, or an RF interface. Verifying correct behavior of large mixed-signal SoCs using analog models in Spice or Verilog-AMS is a big bottleneck for verification when the digital logic is modeled as RTL. Traditionally, digital verification engineers have made assumptions about the analog components, and analog designers likewise have made assumptions about the digital behavior. This is a rich source of errors. There is a need to apply advanced verification methodologies and techniques from the digital verification realm to analog components, while balancing the speeds required to verify digital components against the accuracy needed to model analog components. This paper is based on pioneering work being done through a partnership between LSI Shanghai and Cadence. A prototype was developed to demonstrate the benefits of applying an OVM-based verification flow to the verification of a complex analog block that is part of a live project. The advantages of this flow and positive results will be highlighted. The presenter will share his experiences in: 1. applying the OVM to the verification of complex analog IP blocks and 2. applying digital-centric mixed-signal verification (DMSV) techniques to model analog IP for use in an OVM-based verification environment.
Data Abstraction of Interfaces: Pathway to Greater Verification Efficiency Ravi Venugopalan, Sonics Inc. View Session | Download PDF Verilog parameterized interfaces using logic members significantly complicate verification environments. In OVM, the agents and test environment are dramatically more complex when parameterized interfaces are used. Verification complexity can be significantly reduced by using byte, shortint, int, and longint for members of a Verilog interface. By dropping bit-accurate representations of interface members, there is no longer a need to parameterize an interface. The following advantages will be covered in this session: 1. handling of configurable IP without the need for parameterized interfaces; 2. verification of connections with different data widths is achieved automatically; 3. less code is required to work with abstract objects; 4. it is easier to see solutions to verification problems; 5. fewer bugs are written with abstract data types; and 6. the design of OVM drivers is significantly simplified. A highly configurable network-on-chip project within Sonics was used as a test bed to implement new verification methodologies with OVM. Significant challenges were encountered with OVM, as it has shortcomings in addressing configurable IP. As a result of attending this session, designers will have a new understanding of how to approach testing with OVM. The session will make it possible to save 5-35% of the verification schedule.
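The width-abstraction tradeoff is easy to see outside SystemVerilog as well; this hypothetical C++ analogy (not from the session) contrasts a width-parameterized bus type, which forces everything touching it to be parameterized too, with a single abstract type that holds any width up to 64 bits:

```cpp
// Analogy to the session's point, in C++: a width-parameterized interface
// forces parameterization everywhere, while one abstract type does not.
#include <bitset>
#include <cstdint>
#include <cstdio>

// Bit-accurate style: every consumer must carry the WIDTH parameter.
template <unsigned WIDTH>
struct ParamBus { std::bitset<WIDTH> data; };

template <unsigned WIDTH>
void drive(ParamBus<WIDTH>& bus, uint64_t value) {   // parameterized driver
  bus.data = std::bitset<WIDTH>(value);
}

// Abstract style: one type covers any payload up to 64 bits; the actual
// width becomes a runtime field, so drivers and monitors need no parameters.
struct AbstractBus { uint64_t data; unsigned width; };

void drive(AbstractBus& bus, uint64_t value) {        // one driver, all widths
  bus.data = value & ((bus.width < 64) ? ((1ULL << bus.width) - 1) : ~0ULL);
}

int main() {
  ParamBus<12> p; drive(p, 0xABC);        // distinct instantiation per width
  AbstractBus a{0, 12}; drive(a, 0xABC);  // same code path for every width
  std::printf("%llx %llx\n", (unsigned long long)p.data.to_ullong(),
              (unsigned long long)a.data);
  return 0;
}
```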
Integrating Sequential Logic Equivalence Checking with an Automated Flow from SystemC to Silicon Felice Balarin, Cadence; Gagan Hasteer, Calypto View Session | Download PDF The way to deal with increasing design complexity is to raise the level of abstraction. High-level synthesis (HLS) is an emerging technology aimed at this. HLS tools, like Cadence C-to-Silicon Compiler (CtoS), take a design description at a high level in SystemC and synthesize RTL from that specification. It is well known that verification represents a major part of overall design effort and cost. It is thus equally important to develop new verification technologies and methodologies to complement the design side. Widely used logic equivalence checking technology is not sufficient, because HLS includes transformations like pipelining that do not preserve the design's cycle-by-cycle behavior. Calypto's SLEC tool introduces sequential logic equivalence checking technology that can verify equivalence between the SystemC input to CtoS and the RTL Verilog that CtoS generates, even when transformations like pipelining are used. We will describe the integration of CtoS and SLEC, in which CtoS provides additional design information to SLEC so that it can efficiently accomplish this task. This information includes both the precise relation of I/O signals needed to formulate the equivalence condition, and information about internal equivalence points that SLEC can use to dramatically speed up verification. We will use actual customer designs to illustrate how the integration enables efficient, automatic verification of realistic designs.
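To see why pipelining breaks cycle-by-cycle equivalence but preserves a sequential relationship, consider this toy C++ check (illustrative only, not the SLEC algorithm): a 2-stage pipelined implementation produces the same output stream as the untimed model, just shifted by its latency, and that shifted comparison is the kind of I/O relation an equivalence condition must encode:

```cpp
// Toy sequential-equivalence check: an untimed model vs. a 2-cycle pipelined
// implementation of the same function. Outputs match when compared with a
// latency offset -- not cycle-by-cycle.
#include <cassert>
#include <cstdint>
#include <vector>

static uint32_t golden(uint32_t a, uint32_t b) { return a * b + 1; }

struct Pipelined {                 // same function, two register stages deep
  uint32_t stage1 = 0, stage2 = 0;
  uint32_t tick(uint32_t a, uint32_t b) {   // one clock cycle
    uint32_t out = stage2;
    stage2 = stage1;               // stage 2 <- stage 1
    stage1 = a * b + 1;            // stage 1 <- new computation
    return out;                    // output lags input by 2 cycles
  }
};

int main() {
  const unsigned LATENCY = 2;
  std::vector<uint32_t> ref, impl;
  Pipelined dut;
  for (uint32_t i = 0; i < 100; ++i) {
    ref.push_back(golden(i, i + 3));
    impl.push_back(dut.tick(i, i + 3));
  }
  // Equivalence condition: impl[t] == ref[t - LATENCY] for all valid t.
  for (unsigned t = LATENCY; t < impl.size(); ++t)
    assert(impl[t] == ref[t - LATENCY]);
  return 0;                        // all checks passed
}
```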
Strategic Software-Dictated Hardware Design Gary Stringham, Gary Stringham Associates, LLC View Session | Download PDF This paper will present a case study of an ASIC block with an unusually high number of defects. Some defects were very difficult to diagnose, and others required complex workarounds in the device driver to avoid respins. Writing the device driver took more than 12 months. Research showed that many common-sense practices in the hardware/software interface had not been followed. Further research collected many hardware/software interface best practices, which were documented and deployed for all ASIC blocks. These hardware design best practices focus on how software interfaces with hardware. In other words, aligned with the EDA360 vision, software dictates the hardware design. The best practices also promote collaboration, reuse, and first-time-right silicon. They are methodologies that can be employed regardless of the EDA tool set used. Some best practices can be automated, supported, and/or enforced with existing and/or enhanced EDA tools. Hardware design errors can lead to chip respins costing $1,000,000 and 3 months. Following best practices will eliminate or mitigate these hardware design errors, reduce system integration effort, save $12,500 for each engineering man-month spared, and get quality products out the door sooner. This paper will discuss some of the best practices and how, in the case study, device driver development time was significantly reduced with the next version of the block.
Functional Integration-Level Verification Strategy Using Incisive Software Extensions, AMS, Specman, and Incisive Enterprise Manager Joel Artmann, Medtronic View Session | Download PDF The features included in many medical devices are implemented with electrical components, custom ICs, and PCB technologies. The reliability requirements, features, and schedules around these products require a solid virtual functional verification strategy. The verification task for the project relating to this paper was the integration of digital and mixed-signal design components as well as firmware components. The project was to deliver a system that integrated digital ICs, mostly analog ICs, and electrical components onto a hybrid/PCB. These components were required to perform individual functions that, when integrated, provided a usable medical device feature. The verification problem, as a whole, was addressed by 1. breaking the problem down to appropriate levels; 2. establishing verification plans that identified requirements-based goals at each level; and 3. executing verification tasks that provided plan-measurable results at each level. For functional verification, IC-level verification environments were established around ICs and components. These verification environments utilized constrained-random and AMS technologies. The goal of these individual environments was to verify and build confidence in the individual functions they were required to perform. At the integration level, i.e., multi-IC and hybrid/device, the ICs and electrical components were brought together in a homogeneous simulation environment.
Tips and Pitfalls for the First-Time OVM User Dan Steinberg, Google; Dorit Kerem, Cadence View Session | Download PDF The Open Verification Methodology (OVM) is a very powerful and useful solution for enabling verification of complex SoCs and ASICs. When embarking on a new project utilizing a new methodology, it's always a good idea to see what can be learned from others who have recently been down the same path. In that vein, this paper explores real-world user experiences with the OVM on a new project. In addition to highlighting several aspects of the OVM, specific recommendations and pitfalls will be pointed out, including relevant code snippets. Some of the topics discussed include error reporting, configuration objects, sequence layering, and techniques to achieve a 'compile once, run many' strategy. It is the authors' hope that this paper will be useful in helping others painlessly incorporate the OVM into their verification flow.
Automated Self-Checking Mixed-Signal Verification Using Specman-AMS Gregg Sarkinen, Medtronic View Session | Download PDF Most of today's designs are mixed-signal, and mixed-signal verification remains one of the biggest design challenges. These designs require unprecedented integration of analog and digital content without compromising performance or size. Medical devices follow the same trend. Medical ICs are also almost universally mixed-signal in nature, since they must interface with the analog world of the human body. Rising fabrication costs and project schedule dependencies require that the chip be correct in first silicon, putting much more pressure on the verification team. The focus of this paper is on the use of the Specman-AMS flow to create a reusable closed-loop verification environment suitable for verifying the functional behavior of a mixed-signal IC and also reusable in system-level verification. Cadence tools (Incisive Enterprise Simulator, Incisive Formal Verifier, Specman, Spectre, AMS Designer, etc.) are used extensively in the verification effort, which is performed prior to fabrication. In this paper, we will highlight how Medtronic used Cadence verification tools to improve the overall productivity and quality of the IC. Importantly, we will highlight how a coverage-driven verification approach as well as a closed-loop constrained-random verification flow are used for mixed-signal chip verification.
Improving Verification Productivity with Verification Environment Reuse Swapnajit Chakraborti and Gurudutt Bansal, Cadence View Session | Download PDF The Open Verification Methodology (OVM) provides an open, interoperable SystemVerilog verification methodology for complex SoC functional verification. The base-class library provided by OVM, along with the prescribed methodology, enables the construction of a reusable verification environment as well as reusable verification IP. An OVM-based testbench typically contains components required for constrained-random stimulus generation and coverage monitoring. While testbenches are traditionally considered the most important component of any verification environment, given the complexity of verifying today's SoC designs it is critical to be able to compare verification goals to verification results at a higher level of abstraction than just test completion or functional coverage hits. Measuring and tracking verification closure of features, and the ability to track verification progress using a verification plan, are key aspects of successful OVM usage. This capability is provided by the Cadence metric-driven verification (MDV) methodology. The MDV approach, combined with the OVM, addresses the verification challenges of the most complex SoC designs. This paper will illustrate that reuse in OVM (or UVM) is only as good as the reusability of the verification environment.
Assertion Synthesis to Drive Formal, Simulation, and Acceleration Dr. Yunshan Zhu, NextOp Software View Session | Download PDF The value of assertion-based verification (ABV) is well established and well understood. As statements of design intent, assertions document intended behavior and become a critical companion to the RTL code that implements the design. Assertions are the cornerstone of formal analysis, providing the targets for proofs and bug discovery. Assertions add visibility in simulation and ease debug by pointing to the source of failing tests. Assertions even run in hardware platforms for simulation acceleration. With all these advantages, it might be expected that every design and verification engineer would use assertions extensively. Although there has been a big upsurge in adoption recently, more than half of the engineers designing and verifying chips make little or no use of assertions or ABV. The most common reason cited is that they find it difficult to specify the assertions. Some engineers have trouble making the conceptual leap to assertions orthogonal to the implementation; others find the specification languages hard to use. Various forms of automated assertion generation have been tried over the years in an attempt to overcome these objections. This session presents a new approach to generation that yields assertions of far greater complexity and value than earlier methods, thereby encouraging the use of ABV and the receipt of its benefits. The generated assertions run in simulation, formal analysis, and hardware, so they can be used throughout the entire chip development process. Specific results from actual customer projects will be provided to validate this approach.
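As a flavor of what a temporal assertion checks, here is a small C++ monitor (a generic illustration, not NextOp's technology or its generated output) for the classic property "every request is granted within N cycles":

```cpp
// Illustrative runtime checker for the temporal property
// "every req must be followed by a gnt within N cycles".
#include <cstdio>
#include <utility>
#include <vector>

struct ReqGntChecker {
  int max_wait, pending = 0, waited = 0;
  explicit ReqGntChecker(int n) : max_wait(n) {}
  // Sample the signals once per clock; returns false on a violation.
  bool sample(bool req, bool gnt) {
    if (pending && gnt) { pending = 0; waited = 0; }
    else if (pending && ++waited > max_wait) return false;  // deadline missed
    if (req && !pending) pending = 1;
    return true;
  }
};

int main() {
  // Two traces of (req, gnt) pairs: one satisfying, one violating N = 2.
  std::vector<std::pair<bool, bool>> good = {{1,0},{0,0},{0,1},{0,0}};
  std::vector<std::pair<bool, bool>> bad  = {{1,0},{0,0},{0,0},{0,0}};
  for (auto* trace : {&good, &bad}) {
    ReqGntChecker chk(2);
    bool ok = true;
    for (auto [req, gnt] : *trace) ok = ok && chk.sample(req, gnt);
    std::printf("trace %s\n", ok ? "passed" : "FAILED");
  }
  return 0;
}
```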
Real Portable Models for System/Verilog/A/AMS Bill Ellersick, Analog Circuit Works View Session | Download PDF This paper presents a standards-based modeling and simulation methodology that is portable and efficient. Real-value discrete-time Verilog behavioral models of mixed-signal circuits simulate accurately and efficiently. To enable model portability across variants of the Verilog language, a set of `define macros is presented. A Verilog-A testbench verifies both the model and the transistor-level design to ensure correspondence. This paper will show how using the methodology increases the choice of mixed-signal circuits and EDA tools for SoC designers, while expanding the markets of circuit and EDA tool providers and improving semiconductor industry efficiency.
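The essence of real-value discrete-time modeling is replacing continuous analog behavior with a sampled update equation. Here is a hedged C++ illustration of the concept (the session's models are written in Verilog variants, not C++): a first-order RC low-pass filter stepped at a fixed sample rate.

```cpp
// Real-value discrete-time model of a first-order RC low-pass filter:
// the continuous circuit is reduced to one sampled update per time step.
#include <cmath>
#include <cstdio>

int main() {
  const double R = 1e3, C = 1e-9;          // 1 kOhm, 1 nF -> tau = 1 us
  const double Ts = 10e-9;                 // 10 ns sample step
  const double alpha = 1.0 - std::exp(-Ts / (R * C));  // exact step factor

  double vout = 0.0;
  for (int n = 0; n < 500; ++n) {          // 5 us of simulated time
    double vin = 1.0;                      // unit step input at t = 0
    vout += alpha * (vin - vout);          // discrete-time filter update
    if (n % 100 == 99)                     // at t = 1 us, vout ~ 0.632 V
      std::printf("t=%4.1f us  vout=%.4f V\n", (n + 1) * Ts * 1e6, vout);
  }
  return 0;
}
```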
Advanced Device Development Using Modern Programming Methodologies Ted Paone, Cadence Design Systems View Session | Download PDF To take advantage of the powerful tools in an IC 6.1 design flow, the designer must be working with a robust design kit. The kit must support all of the different tools in the design flow; this requires that the devices in the kit have a complete and correct set of the data required by the tools in the flow. Cadence is developing an object-oriented device infrastructure to support your efforts in developing robust design kits. It is important to focus on devices, which are the complete data representation of the circuit, instead of just Pcells. There are complex data relationships among the elements of a device, such as the CDF, Pcells, schematic, symbol, callbacks, and simulation data. Data changes in a single element often require multiple edits to the other elements to keep everything in sync. A single source for the device description compiles into all of the elements of the device. This methodology eases initial entry, minimizes the amount of rework, and ensures that your designers have devices that properly represent the process and support the tools they need. The object-oriented device infrastructure (OODI) is built in SKILL and SKILL++. It defines an extensible methodology for the rapid creation and assembly of devices, and the integration of those devices into the PDK.
Leveraging the Cadence OA ITDB to Create Process-Option-Based PDKs Asif Khan, Texas Instruments View Session | Download PDF Recent focus on giving designers a better understanding of technology options, and on supporting multiple process flows from a single PDK, has driven the development of a new capability for TI's analog PDKs. TI's Analog PDK Automation and EDA teams have been jointly developing the "Component vs. Flow" capability by leveraging the OpenAccess Incremental Technology Database (ITDB). Component vs. Flow provides a data-driven project setup capability that allows a design team to build a project-specific PDK that ensures feature compatibility and excludes unnecessary components and masks based on the initial project setup choices. PDK ITDB libraries are auto-generated from a single source to ensure consistent results and to improve quality and productivity. This presentation will provide an overview of the new capability, including the automation techniques used for building the technology files and design kits.
B0 in A1: Implementing Metal-Stepping Mega ECOs and the Associated Risks/Rewards Ranjit LoboPrabhu, Netronome Systems Inc.; Robert Dwyer, Cadence View Session | Download PDF Metal steppings have traditionally been used to make small logical changes. Undertaking a large logical ECO in a metal stepping can achieve significant cost savings compared to an all-layer change, but introduces risks. Having design tools as well as a methodology/process in place provides a path to reducing both cost and the associated risk. Part of the cost savings is in schedule, as fabrication takes less time; implementing the ECO in fewer metal layers also yields further savings in mask-layer costs. This is relevant to all silicon companies, big and small. Encounter Conformal ECO Designer and Encounter RTL Compiler - Physical provided all the capability needed to do this.

The paper goes through a workflow that displays the results we were able to achieve. Much of the work involves a designer understanding the physical limitations on spare cells and how to strike a balance to achieve the necessary results. In addition, the use of Cadence design services provided a seamless and sturdy interface to the verification and tapeout processes. In this presentation, we provide a case study of how we implemented changes in a PCI Express controller to enhance performance. The session will give attendees a better understanding of implementing ECOs in a Cadence environment. The end result is a cost savings and a schedule savings of 6 weeks in an overall project schedule of 3 months.
Mixed-Signal Simulation Experience with Multi-Chip Module Design Sundaram Sangameswaran, Texas Instruments View Session | Download PDF Current-day mixed-signal simulations are challenged by designs rapidly changing into multi-chip modules (MCMs) featuring multiple cores using multiple process design kits (PDKs). This paper presents the challenges we faced in top-level simulation of a complex mixed-signal design using Virtuoso AMS Designer. The mixed-signal design had DC/DC buck converters and LDOs, along with other design units, in one analog process, in closed loop with a microcontroller and microprocessor design in another process node. The complexity was roughly 350K gates with RAM, FLASH, and ROM. Setting up this large a design, in terms of correct partitions, was the first and foremost challenge. Sub-configurations were used tactically at various design boundaries to ease the partitioning process.

Secondly, supply-sensitive simulation was a must, which involved compiling functional blocks with supply-sensitive constructs and using scope disciplines at the correct design partitions to reduce elaboration time. To obtain correct delays, backannotation of digital timing SDF to match the correct design hierarchies was a challenge due to dynamic digital design changes and the alignment scripts. Multiple iterations were performed to optimize the view bindings to reduce the netlisting, compilation, and elaboration time of AMS Designer, which later paid off in design debug and the running of various checks before tapeout. This presentation shows how we not only managed to simulate and analyze the entire top level, fixing critical design bugs before tapeout, but also realized first-silicon success with perfect silicon-simulation correlation.
SKILL Rejuvenated: Extending SKILL Language Syntax Sylwester Warecki, Freescale View Session | Download PDF The Cadence SKILL language—providing, among other features, DFII database access, user interface manipulation, and inter-process communication (IPC)—allows for full-custom design with an arsenal of powerful programmable cells (Pcells), multiple simulator selections through the Virtuoso Analog Design Environment (ADE), and flexible technology access and definition. The language offers numerous customizations, including procedural syntax extension based on built-in macros. However, some popular forms of syntax (widely accepted in other, younger languages like Perl or Python) are unavailable. In particular, the absence of operator overloading confines the developer to the limited space of basic SKILL syntax.

This paper presents a technique that allows for the implementation of practically any new SKILL language extension with a single, specialized syntax module. Thanks to the proposed solution, constructs such as multiple assignment, (a b c) = (d e f), and compound assignment, *= and +=, become possible. In addition, operator overloading and new operator injection introduce long-awaited paradigms to SKILL code development. The infrastructure of the enhanced interpreter, as well as examples of some implemented operators, are presented.
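For readers who know C++ rather than SKILL, the constructs being retrofitted map onto familiar idioms; this sketch (an analogy only, not the paper's SKILL implementation) shows multiple assignment via std::tie and a user-defined compound-assignment operator:

```cpp
// C++ analogies for the SKILL extensions discussed: multiple assignment
// like (a b c) = (d e f), and operator overloading for a user type.
#include <cstdio>
#include <tuple>

struct Vec2 {               // toy user-defined type with an overloaded +=
  double x, y;
  Vec2& operator+=(const Vec2& o) { x += o.x; y += o.y; return *this; }
};

int main() {
  int a, b, c;
  int d = 1, e = 2, f = 3;
  std::tie(a, b, c) = std::make_tuple(d, e, f);  // multiple assignment
  std::printf("a=%d b=%d c=%d\n", a, b, c);

  Vec2 v{1.0, 2.0};
  v += Vec2{0.5, 0.5};                           // compound assignment
  std::printf("v=(%.1f, %.1f)\n", v.x, v.y);
  return 0;
}
```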
Convergent Silicon Realization via Early Elimination of Yield Detractors Using Integrated DRC+ in Digital and Custom Implementation Flows Vito Dai, GLOBALFOUNDRIES View Session | Download PDF With increasing design cost and time-to-market pressure, a redesign or a delay of several weeks because of poor yield may mean the financial death of a project and the subsequent loss of a market window opportunity. At 28nm and below, manufacturing challenges are such that minimum DRC rules fail to capture too many potential yield issues, whereas global application of relaxed DRC rules causes an unacceptable increase in design area. GLOBALFOUNDRIES recently announced an innovative DFM approach called DRC+ that is more than 100x faster than traditional litho simulation. DRC+ leverages fast 2D pattern matching to search designs for potential yield detractors and mark them for fixing with relaxed DRC rules.

To enable DRC+ for designers, GLOBALFOUNDRIES has made available the industry’s first 28nm pattern library of potential yield detractors. Cadence has been an early development partner with GLOBALFOUNDRIES in the development of the DRC+ flow, which leverages Cadence pattern classification technology. This paper will describe what, when, why, and how designers incorporate DRC+ into an existing digital implementation flow as part of DFM signoff at 28nm and below. With Cadence pattern matching and automated fixing built into Encounter technology, designers can quickly and efficiently identify and fix DRC+ errors, thereby avoiding potential manufacturability issues down the road.
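To give a feel for the core idea of 2D pattern matching for yield detractors, here is a toy illustration (not the DRC+ engine, and not a pattern from the GLOBALFOUNDRIES library) that scans a rasterized layout grid for one hypothetical bad 2x2 pattern and reports where it occurs:

```cpp
// Toy 2D pattern matching: scan a rasterized layout for occurrences of a
// known "yield detractor" pattern and report their locations for fixing.
#include <array>
#include <cstdio>
#include <string>
#include <vector>

int main() {
  // 1 = metal, 0 = empty: a tiny rasterized layout window (invented).
  const std::vector<std::string> layout = {
      "110011",
      "110110",
      "001100",
      "011001"};
  // Hypothetical detractor: a 2x2 checkerboard (diagonally-adjacent metal).
  const std::array<std::string, 2> bad = {"10", "01"};

  for (size_t r = 0; r + bad.size() <= layout.size(); ++r)
    for (size_t c = 0; c + bad[0].size() <= layout[r].size(); ++c) {
      bool hit = true;
      for (size_t i = 0; i < bad.size() && hit; ++i)
        for (size_t j = 0; j < bad[0].size() && hit; ++j)
          hit = layout[r + i][c + j] == bad[i][j];
      if (hit) std::printf("detractor at row %zu, col %zu\n", r, c);
    }
  return 0;  // a real flow would mark each hit for relaxed-rule fixing
}
```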
Encounter Digital Implementation System: The GUI's API—Past, Present, and Future Jason Gentry, Avago Technologies; Robert Dwyer, Cadence View Session | Download PDF The Encounter Digital Implementation (EDI) System allows for customization at the GUI level, but has recently changed the API for accessing the various aspects of the GUI. Upon realizing the extensive amount of GUI customization done by its customers, Cadence focused on providing full-featured, albeit protected, APIs for adding on to the base EDI GUI. This joint Avago/Cadence session will explain the differences between the older, Tcl/Tk-based GUI and the current Qt-based GUI.

The session will also provide examples and source code comparing and contrasting how certain tasks were done between the two GUIs. This session will give a preview of what enhancements to expect from the GUI in upcoming releases of EDI System. Example menus and source code will be provided showing what Avago Technologies has done to extend EDI System for activities such as custom-routing porosity solutions, special-net wire creation, and other tasks. This session will provide easy-to-implement GUI examples to attendees who are responsible for deploying custom solutions to other EDI System users.
Less Pessimism by Applying Design-Specific OCV Analysis Michio Komoda, Renesas View Session | Download PDF Traditional deterministic static timing analysis (STA) is widely used in digital design today. But with smaller process nodes, variation increases, which makes the STA approach very pessimistic. Statistical static timing analysis (SSTA) can reduce this pessimism by analyzing the aggregate probability of delay over a path. However, the concept of statistical timing analysis is quite different from traditional deterministic STA methods, which makes the practical application of this technology more difficult. The design-specific OCV (DS-OCV) technology available from Cadence helps to address this difficulty by simplifying the design flow while still leveraging the pessimism-reducing benefits of SSTA. This presentation introduces the DS-OCV concept and details the experiences of Renesas Electronics in applying it in a production design flow.
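The pessimism reduction comes from how independent stage variations combine. This hedged numeric sketch (the numbers are invented, and this is not the DS-OCV algorithm itself) contrasts a flat OCV derate, which margins every stage worst-case, with statistical root-sum-square aggregation along a path:

```cpp
// Why statistical aggregation is less pessimistic than a flat OCV derate:
// independent per-stage sigmas add in quadrature (RSS), not linearly.
#include <cmath>
#include <cstdio>

int main() {
  const int stages = 20;
  const double mean = 50.0;   // nominal delay per stage, ps (invented)
  const double sigma = 5.0;   // per-stage std deviation, ps (invented)

  double nominal = stages * mean;

  // Flat OCV: derate every stage by 3 sigma -> margins add linearly.
  double flat_margin = stages * 3.0 * sigma;

  // Statistical: independent variations add in quadrature along the path.
  double rss_margin = 3.0 * sigma * std::sqrt(static_cast<double>(stages));

  std::printf("nominal path delay   : %6.0f ps\n", nominal);
  std::printf("flat OCV (+3s/stage) : +%5.0f ps\n", flat_margin);   // +300 ps
  std::printf("statistical (3s RSS) : +%5.0f ps\n", rss_margin);    // ~+67 ps
  return 0;
}
```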
RTL Power Profiling for Early Analysis and Optimization of Power for Design and Verification Teams Nandini Chintala, Cadence View Session | Download PDF Power numbers are becoming an increasingly prevalent specification requirement for design teams. However, there are no good ways to measure power at the design stage beyond gross architectural estimations. In this paper, we describe a methodology and design process for early measurement of power directly at the RTL level during the design and verification (D&V) stage. By using an integrated simulation-based approach alongside block-level synthesis, power can be measured and profiled at the block and system level. This further allows D&V teams to optimize a design and conduct various architectural explorations at the system level.
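A simulation-based RTL power profile boils down to combining switching activity from simulation with capacitance estimates from synthesis. This hedged sketch (the per-block numbers are invented) applies the standard dynamic power formula P = alpha * C * V^2 * f per block:

```cpp
// Block-level dynamic power profile: P = alpha * C * V^2 * f, where alpha
// (switching activity) comes from simulation and C from synthesis estimates.
// All per-block numbers below are invented for illustration.
#include <cstdio>

struct Block { const char* name; double alpha; double cap_farads; };

int main() {
  const double vdd = 1.0;      // supply voltage, V
  const double freq = 500e6;   // clock frequency, Hz
  const Block blocks[] = {
      {"cpu_core", 0.15, 2.0e-9},
      {"dsp",      0.25, 1.2e-9},
      {"mem_ctrl", 0.08, 0.8e-9}};

  double total = 0.0;
  for (const Block& b : blocks) {
    double p = b.alpha * b.cap_farads * vdd * vdd * freq;  // dynamic power
    total += p;
    std::printf("%-8s : %7.2f mW\n", b.name, p * 1e3);
  }
  std::printf("total    : %7.2f mW\n", total * 1e3);
  return 0;
}
```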
Experiences in Designing, Simulating, and Building a 72-Layer Logic Board with Multiple Cadence Tools Joseph Socha and Leo Garza, Sedona Intl View Session | Download PDF Integration occurring inside the package through advanced-node SoCs or multi-die SiPs is producing higher pin-count devices that challenge traditional large, dense logic board design. Moore's Law continues to deliver ever-increasing densities as prices continue to drop. This presentation discusses a case study in which the Cadence front-to-back Allegro PCB solution was used for schematic capture, layout, constraint management, and high-speed analysis. Additional third-party tools were integrated into this process to extend the analysis capabilities of the tools and achieve first-turn success. The PCB in this case study is the largest Allegro database on record, with an impressive count of 72 layers, multiple 5,000-pin parts, miles of interconnect trace, and power requirements in the thousands of amps—all on a single board.
Sense and Power: Making SMPS Design Easy with eDesign Studio and the OrCAD Capture Flow Carmelo Vicari and Francesco D'amico, STMicroelectronics View Session | Download PDF To continue their heritage of innovation and leadership, pioneers in analog and power semiconductors must also provide design support and tools that allow customers to perform simulations of leading-edge analog and power chips. Analog products are already supported by ST's SMPS@eDesign Studio free online tool, which was conceived specifically to help in the design and simulation of switch-mode power supply (SMPS) systems. This tool simplifies power supply design and offers designers the flexibility to choose the right products and topologies. A full design is generated from the high-level requirements specified by the user, providing all relevant parameters and results; a full and interactive schematic; a full and interactive bill of materials (BOM); and a full set of analysis diagrams.

To improve accuracy without increasing computational complexity, a new IC evaluation platform using macromodels was developed, integrating the online tool with Cadence OrCAD PSpice technology. This approach enables the robust and widely adopted OrCAD platform to simulate ST's analog and power product families. From now on, designs obtained via SMPS@eDesign Studio can be more accurately simulated within the OrCAD platform using PSpice. OrCAD PSpice is a full-featured, native analog/mixed-signal circuit simulator that is considered the de facto industry-standard Spice-based simulator for system design.
Case Study for Implementation of Cadence Substrate Noise Analysis into an IC 6.1.4 Environment Pilar Hsue, Independent Consultant View Session | Download PDF Many mixed-signal designs are A/d (big analog, small digital). As the size of these designs continues to increase for deep-submicron design, substrate noise becomes an issue. Substrate noise is caused either by intrinsic semiconductor noise (thermal, shot, burst, etc.) or by enough digital switching noise to disturb sensitive analog circuits on a mixed-signal design. This paper presents a case study of a customer using the Cadence QRC substrate noise analysis flow. Verification of the results from SNA (substrate noise analysis), a necessary step, requires analysis involving qualification of process profiles, Assura runset implementation, and post-layout simulation. Does the flow just 'magically' work? How do you make sure the various techfiles interact with each other correctly? What are the tool limitations? This paper will answer these questions and describe how to successfully integrate these tools into a repeatable, usable process.
Chip Finishing for Advanced Node SoCs – Challenges beyond Design Sign-off Milind Weling, Cadence View Session | Download PDF Consumer demand is forcing IC manufacturers to push the limits of performance and functionality for SoCs. Today, chips have scaled to include thousands of IP blocks using more than a billion transistors, and the complexity only increases at 28nm and beyond. IC design is no longer a monolith; instead, it is a mosaic of processing cores, on-chip memories, and IP blocks from various third-party sources, complicating the chip finishing process at advanced nodes. There are many variants of the chip assembly process: one uses the physical design and editing environment; another assembles chips during place and route.

This paper, however, analyzes a third variant of chip finishing that is gaining popularity. It involves assembling chips in GDSII or OASIS format, where design database sizes range from 10s to 100s of GB and require minor edits to the layout data prior to its handoff to the mask shop. This phase of the design flow requires fast turnaround time, rapid layout viewing of GDSII or OASIS, nimble analysis, flexible layout manipulation, and seamless integration with physical verification tools. We will present advanced node SoC case studies of chip finishing flows that meet time-to-market demands for advanced node designs, and the benefits of using Cadence QuickView and its deep integration with Cadence Physical Verification System (PVS).
IC 6.1 Foundry Methodology to Release PDKs (Analog) with Standard Digital Libraries Ofer Tamir, TowerJazz View Session | Download PDF Foundries should define a clear structure for the IC 6.1 PDK for mixed analog-digital flows. Currently, each foundry defines its own methodology, which makes it hard for customers to build their mixed-signal flows and leverage all the benefits that the Cadence environment offers. TowerJazz will present a methodology for such a structure that was defined with the Cadence methodology team. It is the first time that such a methodology has been clearly defined that takes advantage of all OA benefits, such as the hierarchical structure of digital over the analog definition and clear via definitions at each level. We will define the place of TECHLEF and LEF in the new methodology—needed or not? In this presentation, we will go over the defined structure and give examples of 3 library levels and the content of each.
Simulating PLL Jitter and Phase Noise Using Transient Noise Analysis Bob Mullen, Cadence; Philip Chen, TSMC View Session | Download PDF Spectre transient noise analysis can be an effective means to measure PLL circuit performance and correlate better with silicon than a regular transient analysis. Important PLL measurements such as period jitter, absolute period jitter, and phase noise can easily be determined and plotted after simulation. Since transient noise can impact simulation time, the Spectre APS High-Performance Option can be used to reduce simulation time. Results are shown from a PLL design in the TSMC 28nm process.
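Once a transient-noise run produces clock-edge timestamps, period jitter is just the standard deviation of successive edge-to-edge periods. This post-processing sketch (synthetic edge times, not Spectre output) shows the calculation:

```cpp
// Post-processing sketch: compute mean period and period jitter (RMS) from
// a list of rising-edge timestamps, as one would extract from a transient
// noise simulation. The edge times below are synthetic.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
  // Rising-edge times in ns for a nominal 1 GHz clock with injected noise.
  std::vector<double> edges = {0.000, 1.003, 1.998, 3.001, 4.004, 4.997, 6.002};

  std::vector<double> periods;
  for (size_t i = 1; i < edges.size(); ++i)
    periods.push_back(edges[i] - edges[i - 1]);

  double mean = 0.0;
  for (double p : periods) mean += p;
  mean /= periods.size();

  double var = 0.0;                        // variance of the period samples
  for (double p : periods) var += (p - mean) * (p - mean);
  var /= periods.size();

  std::printf("mean period        : %.4f ns\n", mean);
  std::printf("period jitter (RMS): %.1f ps\n", std::sqrt(var) * 1e3);
  return 0;
}
```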
Pcell Generation with the Cadence PAS Tool Romain Feuillette, STMicroelectronics View Session | Download PDF The main PDK demands today are focused on two issues: development of nanoscale processes and support of new features on mature technologies. The first requires innovation and flexibility; the second requires an efficient method to quickly deliver improvements to the customer. The Cadence PAS tool allows you to generate Pcells based on the GTE file format. This method is both easy to use and efficient. Moreover, it allows you to deal with complex problems such as via filling of polygons, which can save weeks of development time. The possibility of CSV format translation increases the safety of PAS. Most of our benchmarking was done at the 65nm node, but also on 0.35µm, in both OA and cdba.

A production delivery has been made at 20nm, including some Pcells generated with PAS. As an example, via filling of complex polygons has shown the efficiency of PAS: via filling of a 45° rectangle took 4 GTE frames instead of 300 lines of SKILL code, and via filling of an octagon took 6 GTE frames instead of 500 lines of SKILL code. Undeniably, the use of GTE increases productivity for Pcell development. GTE also increases the quality of the SKILL code generated by PAS, because complex functions such as via filling are supported by Cadence technology instead of requiring Pcell developers to write and maintain them endlessly.
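To give a feel for what "via filling" automates, here is a toy C++ routine (illustrative only; real Pcell via filling must also honor enclosure, spacing, and cut-size rules from the technology file) that tiles a rectangle with a centered via array at a fixed pitch:

```cpp
// Toy via filling: tile a rectangle with as many vias as fit at a given
// pitch, centering the resulting array. Real Pcell code must also honor
// enclosure, spacing, and cut-size rules from the technology file.
#include <cstdio>

int main() {
  const double x0 = 0.0, y0 = 0.0, x1 = 1.30, y1 = 0.55;  // rectangle, um
  const double via = 0.10, pitch = 0.20;                  // cut size, pitch

  // Columns/rows that fit (small epsilon guards against FP round-off).
  int nx = (int)((x1 - x0 - via) / pitch + 1e-9) + 1;
  int ny = (int)((y1 - y0 - via) / pitch + 1e-9) + 1;
  double ox = x0 + (x1 - x0 - ((nx - 1) * pitch + via)) / 2;  // center array
  double oy = y0 + (y1 - y0 - ((ny - 1) * pitch + via)) / 2;

  for (int j = 0; j < ny; ++j)
    for (int i = 0; i < nx; ++i)
      std::printf("via at (%.3f, %.3f)\n", ox + i * pitch + via / 2,
                  oy + j * pitch + via / 2);
  std::printf("%d vias placed\n", nx * ny);
  return 0;
}
```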
A NanoRoute-Based Approach to DFM Optimization for the GLOBALFOUNDRIES 28SLP Process Rainer Mann and Steffen Seeling, GLOBALFOUNDRIES View Session | Download PDF Physical design rules are increasing in number and complexity for advanced technology nodes such as the GLOBALFOUNDRIES 28SLP process. This alone poses severe challenges to routing technology to achieve design rule closure while meeting design constraints such as density, timing, power, and signal integrity goals. In addition to the required rules, advanced technology manuals call out a set of recommended design rules that, if satisfied by the layout, improve manufacturability and yield. Recommended rules impose a new optimization target for modern routers because of their soft and opportunistic nature: the router needs to approximate or satisfy the recommended rule by exploiting layout opportunities without compromising either required rules or design constraints.

This paper demonstrates a Cadence NanoRoute-based methodology to implement recommended rules on top of the required design rule closure for the GLOBALFOUNDRIES 28SLP process. It discusses in detail the additions to the rule modeling within a technology file for NanoRoute as well as the necessary NanoRoute flow enhancements. The paper shows the application of this methodology to a set of test cases and compares resulting design metrics as well as runtime impacts for different variants of recommended rule implementation. This information, plus ready templates, will help attendees make an educated choice on the recommended rules methodology.
Fullchip Floorplanning and Timing with an ASIC Vendor's Processed Netlist Using the Cadence Encounter Digital Implementation System Stanley Peng, Cisco Systems View Session | Download PDF In an ASIC-flow netlist handoff, early-stage floorplanning is crucial to RTL synthesis and to top-level congestion/timing during the implementation process. For any specific chip, a methodology of fullchip integration and processing is adopted in-house in order to detect any sign-off shortfall. Furthermore, the resulting integrated fullchip netlist can be used by an implementation tool such as the Cadence Encounter Digital Implementation System to perform the necessary power planning; module sizing and floorplanning; module pin assignment; placement and routing congestion analysis; partitioning for module place-and-route with timing predictability; and top-level congestion and timing analysis.

Floorplanning in-house before ASIC-flow handoff enables quality check-off and a feasibility assessment of the floorplan, shortening the time and effort spent at the vendor's site. The flow will be presented in detail, covering top-level floorplanning, power planning, module pin assignment, module partitioning, placement and routing congestion analysis, and top-level timing analysis. A specific example from a 45nm technology will be used.
Analog Layout Automation: A Transition to 6.1.x at TI Donna Ducharme, Texas Instruments View Session | Download PDF Transitioning users of 5.1.41 analog layout automation to 6.1.x is a large task. We see this as an opportunity in EDA, yet layout sees it as a setback. We need to keep engineers productive and positive about the change by getting them up and running as smoothly as possible. In addition, adopting a new routing tool for analog users seemed daunting and required many infrastructure changes in the PDKs, impacting schedules. Infrastructure for our PDKs that was once considered "state of the art" needed maturing to enable our users to leverage the new capabilities.

All of this takes development time, tool testing, support training, user training, and deployment. This paper explains the Analog Layout Automation Road Map at TI. It details the reasons behind this plan as well as the stumbling blocks we came across in implementing it.


Browser support:
Windows – Internet Explorer 6 or above, Firefox 2+, or Chrome 1+, with Silverlight 2+ or with the Port 25 Windows Media Player Firefox Plugin and Windows Media Player 6.4+. Mac OS X – Safari, Firefox 2+, or Chrome, with Silverlight 2+.

If you need assistance accessing CDNLive! On-Demand content, contact Cadence.