Welcome to CDNLive! On-Demand
San Jose was the setting for CDNLive! Silicon Valley 2010, a jam-packed day featuring a
morning of general sessions, a networking luncheon with Cadence R&D, an afternoon of
technical sessions in 6 tracks, and an evening of Cadence and partner exhibits.
At the event, Cadence also unveiled its new, holistic approach to Silicon Realization, a key
component of the EDA360 vision.
To access the 6 tracks of technical sessions, please log in with your Cadence.com account (corporate email required). If you don’t have an account, create one here.
See browser support below for playback of multimedia sessions.
In this dynamic opening to CDNLive!, Shapeology performers depict the transformation occurring in EDA, taking the audience on a visual tour of where the electronics industry has been and where it is headed
in realizing the EDA360 vision.
Lip-Bu Tan, President and CEO, Cadence
EDA's biggest technology challenges—HW/SW integration, IP optimization, and increasing software content—are driving the need for innovation, globalization, and ecosystem collaboration. Demand for killer
apps, continuous connectivity, mobility, and green design holds exciting possibilities for EDA. Market forecasts anticipate 32% growth for semiconductors next year, with numerous prospects in China and more frequent acquisitions.
Cadence Product Strategy
John Bruggeman, Senior Vice President and Chief Marketing Officer, Cadence
Revitalizing the EDA industry from a profitability standpoint requires shifting from a hardware-first approach to an application-driven business model. Get a high-level view of the Cadence product strategy for each layer
of the EDA360 vision. Highlights include an open standards-based ecosystem that delivers accurate models, a virtualization platform for model selection, and tools that optimize and verify IP for integration.
Cadence Silicon Realization Overview
Chi-Ping Hsu, Ph.D., Senior Vice President, Research and Development, Silicon Realization Group, Cadence
Productivity and predictability issues are making it crucial for engineers to optimize functional, electrical, and physical specifications concurrently rather than in the typical EDA silos. This close look into Silicon
Realization reveals three critical requirements: unified design and verification intent; higher levels of abstraction; and convergence of late-stage design/manufacturing data into the early phases of design.
Realizing End-to-End Mixed-Signal Design
Dave Desharnais, Product Marketing Group Director, Silicon Realization, Cadence
An in-depth technical discussion and demonstration on how the three key elements of Silicon Realization—intent, abstraction, and convergence—can be applied to mixed-signal challenges and deliver an
end-to-end, predictable path to silicon success. Key concepts include analog behavioral modeling, design (power) intent for mixed-signal IP, analog/digital interoperability, and mixed-signal design closure.
design, and shrinking market windows, the challenge of completing the verification plan on time has increased manifold. Time-critical tests, with requisite scoreboard monitoring, take days of simulation runtime, imposing protracted schedules on high-quality assurance milestones. Complementing verification with in-circuit emulation (ICE) and/or FPGA-based prototyping can provide much-needed relief in performance and the ability to verify with real-world directed stimulus and response, but the need to explore deep corner-case bugs with random stimulus still remains. Both of these verification modes predominantly require full design synthesizability and the availability of software drivers to achieve higher-quality modeling. Hardware acceleration of the existing testbench not only allows the discovery of deep bugs by enabling any simulator to be accelerated while maintaining metric-driven verification methods and use models, but also serves as a unique bridge between simulation and ICE. This early-phase acceleration, which enables gradual movement of behavioral design blocks into the synthesizable domain, allows a smoother transition into and faster bring-up of ICE, thereby improving overall productivity and reducing time to market.
Metric-Driven Verification Using TLMs
Per Edstrom and Ajay Goyal, Cadence
Automated RTL verification relies on the existence of an independent “golden” reference model to ensure that algorithmic data transformation correctness is established. In the current verification flow, design
verification happens at several abstraction levels: the algorithm model, the TLM protocol stage, the SystemC signal stage, and the RTL stage. Each stage has its own verification environment with very little reuse between them, which makes the verification effort large and duplicative. This paper shows that by creating a new metric-driven verification environment that first fully verifies the untimed algorithm using transaction-level modeling (TLM) and then reuses the same verification environment for the other abstraction levels, we were able to reduce the overall verification effort significantly. This approach provided a dramatic improvement in verification productivity and helped us reduce the time spent on RTL verification.
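The reuse idea can be sketched in a language-agnostic way. The following Python fragment is illustrative only, not the authors' flow: the FIR filter, the `Scoreboard` class, and all names are invented to show how one checker written against an untimed golden model can be reused to check implementations at any abstraction level.

```python
# Illustrative sketch (not the authors' code): one scoreboard written against
# an untimed golden algorithm is reused to check models at every abstraction
# level, so the checking logic is built only once.

def golden_fir(samples, coeffs):
    """Untimed algorithmic reference: a simple FIR filter."""
    out, history = [], [0] * len(coeffs)
    for s in samples:
        history = [s] + history[:-1]
        out.append(sum(h * c for h, c in zip(history, coeffs)))
    return out

class Scoreboard:
    """Compares any implementation's output stream against the golden model."""
    def __init__(self, coeffs):
        self.coeffs = coeffs
        self.mismatches = 0
    def check(self, samples, dut_output):
        expected = golden_fir(samples, self.coeffs)
        self.mismatches += sum(1 for e, a in zip(expected, dut_output) if e != a)
        return self.mismatches == 0

# A TLM-level model (here just a function with the same contract) is checked
# by the *same* scoreboard that would later check an RTL-level model.
def tlm_fir(samples, coeffs):
    return golden_fir(samples, coeffs)  # stand-in for a transaction-level model

sb = Scoreboard([1, 2, 1])
stimulus = [3, 0, -1, 4]
ok = sb.check(stimulus, tlm_fir(stimulus, [1, 2, 1]))
```

The point of the sketch is that only the model behind `tlm_fir` changes between abstraction levels; the environment around it does not.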
However, as design size grows, various EDA tool runtimes increase and the performance necessary to do the job is not available. Verification engineers have a narrow window to validate product requirements (for both functional verification and low-power verification). As a result, they typically focus on IP or short tests only. With aggressive time-to-market requirements, verification engineers do what they can, but it is frequently not enough. This paper describes an environment and an approach that can enable system-level verification by offering a high-performance verification computing platform, together with a methodology, to create system scenarios and run them on a prototype of the SoC, to analyze, test, and optimize the design’s low-power techniques.
and the use of SystemC TLM2 modeling as a key component of a functional verification methodology may be one such serendipitous by-product of the ESL vision. An OVM testbench incorporating a TLM2.0 reference model has several demonstrated benefits: 1. TLM2.0 models are ideally suited as virtual prototypes for early software development; 2. TLM2.0 models are often available before a working RTL simulation model and enable earlier spec validation and testbench development; 3. As specs evolve, adding new features and modifying existing features can be as simple as updating the reference model without changes to other stimulus generators or checkers; 4. TLM2.0 models are a useful repository for checking and debug or reporting logic; 5. TLM2.0 models improve functional coverage planning and collection by allowing direct access to "virtual" DUT states without direct probing of the DUV. A chip-level SystemC TLM2.0 model was used as part of a metric-driven methodology to functionally verify the design of NXP’s PCU9668 I2C controller. This paper describes what was learned and some of the challenges inherent in this approach. As a stepping-stone to a top-down ESL methodology, this approach has many parallels that will be discussed.
engineers is critical to optimizing product quality while controlling costs and meeting aggressive timelines. This IBM Rational presentation discusses key elements of rapid co-development and explores how the use of shared processes between hardware and software development enable these teams to co-develop smarter products more efficiently. Once shared processes are established, common tools can drive the next level of efficiency. Integrations between key solutions from IBM Rational and Cadence form the bridge between hardware and software development. IBM Rational supports co-development of complex electronics in several ways: management of hardware and software requirements and their inter-relationships; upfront system modeling to determine which aspects of the design will be implemented in hardware vs. software; and common configuration management and defect tracking for both hardware and software development. IBM has successfully used these techniques internally across a 25,000+ user base to significantly reduce development costs, increase reuse, and improve quality. Integrating these solutions with Cadence solutions for hardware design provides an optimized environment for co-development of hardware and software.
times are shrinking, so software is becoming a significant aspect of system design and verification. Creating application software rapidly, debugging efficiently, optimizing performance and power, and completing hardware/software co-verification is more complex than ever. In this presentation, you will learn about the latest solutions from ARM and Cadence for ARM-based embedded and application software creation, debug, and functional co-verification: the ARM Fast Models enable early software development and co-verification with Incisive SystemC simulation; ARM VSTREAM running with the Palladium XP Verification Computing Environment enables co-verification when the task requires 100% accuracy with ARM cores and RTL SoCs; and both solutions share the widely used RealView Development Suite (RVDS) for software development.
risks, costs, and time to market. That’s why architectural exploration must be performed as early as possible in the design cycle. The proposed solution consists of creating executable specifications based on TLM SystemC models of both software and hardware, automatically generated from graphical capture. This approach provides key benefits for early design-space exploration and low-power optimization. It can be fully integrated into high-level synthesis flows based on Cadence C-to-Silicon Compiler. Integration with UVM verification flows using Incisive tools is also available. Initially, the application and its testbench are modeled and simulated to functionally validate the system’s behavior. Then, this workload is mapped onto a platform including models of processors, buses, an OS, and hypervisors, and the simulation enables a quick what-if analysis of the different alternatives without relying on the availability of hardware IP models. An LTE application will be provided as an example. Instead of taking months to develop handwritten SystemC models or set up lower-level virtual prototyping environments, it only took a few weeks to create a model of the complete system and hence perform the architectural exploration and optimization required.
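The kind of what-if mapping analysis described can be sketched very simply. In the Python fragment below, every task size, platform name, and processing rate is a made-up illustrative number, not data from the LTE case study; the point is only that alternative mappings can be compared analytically before any hardware IP model exists.

```python
# Hedged sketch of early design-space exploration: map a workload's tasks onto
# candidate platforms and compare modeled latency. All numbers are invented.

tasks = {"fft": 4_000_000, "demod": 1_500_000, "decode": 2_500_000}  # ops/frame

platforms = {
    # mapping: task -> (processing element, ops/second it sustains)
    "single_cpu": {t: ("cpu0", 2.0e9) for t in tasks},
    "cpu_plus_accel": {
        "fft": ("accel", 8.0e9),      # FFT offloaded to a hypothetical accelerator
        "demod": ("cpu0", 2.0e9),
        "decode": ("cpu0", 2.0e9),
    },
}

def frame_latency(mapping):
    """Tasks on the same element serialize; different elements overlap."""
    busy = {}
    for task, (pe, rate) in mapping.items():
        busy[pe] = busy.get(pe, 0.0) + tasks[task] / rate
    return max(busy.values())  # the critical processing element dominates

for name, mapping in platforms.items():
    print(f"{name}: {frame_latency(mapping) * 1e3:.2f} ms/frame")
```

Real virtual-platform simulation models contention, buses, and an OS as well, but even this analytic form shows why one mapping halves the frame latency of the other.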
platforms. Virtual platforms are a representation of the hardware platform that allow the complete software stack to be executed (simulated). The benefits of these virtual platforms are that the instruction-accurate models are relatively easy to develop and are available to the software development team early in the system or SoC development cycle. Virtual platforms also provide access for the complete development team, even in multiple locations. In this presentation, in addition to discussing embedded software development on virtual platforms, we will show how the integration of Incisive SystemC simulation, Incisive Software Extensions, processor models from OVP, and software simulation and verification tools from Imperas enables software functional verification. When the virtual platform is coupled with Incisive Software Extensions and Imperas software verification tools, software engineers can verify the functionality of code, such as drivers, in the context of the complete OS running on the platform. This software verification capability has not been possible until now. New technologies (fast simulation and verification from Imperas) and new flows (integration between Cadence and Imperas tools) have made this possible.
not willing to compromise on battery life. Therefore, the need for better power management at the SoC level is critical for consumer devices to succeed in the marketplace. Sonics will discuss how power-aware SoC design must start at the architectural description and move down through the SoC to the gate level to achieve optimal designs. Included will be a description of various power management techniques along with the implementation tradeoffs in these approaches, such as clock gating, rapid power switching, atomic shutdown, and power handshakes.
Resolving the IP-SoC Tower of Babel
Kurt Wolf, Silicon-IP, Inc.; Vikram Phatak, Silicon Enablers; Kenneth Wang, Silicon-IP, Inc.
Separate IP design views are delivered to separate SoC integration teams, and inconsistencies between these views and subsequent quality issues are not discovered until handoff at each stage along the
implementation chain. Delays from weeks to months are caused by determining whether problems arise from the SoC design or the IP function/design view and the resulting time required to obtain the fix. This session provides a best practice, repeatable methodology to create fully validated IP that meets specifications and is consistent among all IP view and design-flow interdependencies. Specific areas covered include: independent and combinatorial IP view compliance, validation of all design deliverables to claims made on datasheets and related operation margins, and verified design-flow integration. The best practice methodology includes procedures from industry experts in the fields of IP due diligence and license negotiation, silicon validation programs, IP/SoC integration, and IP/datasheet validation. A composite case study (to protect confidentiality) is used to demonstrate each step of the methodology. By implementing these best practices, SoC design teams save 2-3 months of IP evaluation, verification, and integration time and money.
Rule checkers can help to some extent, but electromagnetic field simulators are required to see the full picture, both in pre-layout analysis and post-layout verification. Most layout tools have 2D simulators that can offer good insights, but for high accuracy and to address layouts with non-planar elements such as wirebonds, a full 3D simulator is mandatory. CST has offered a plug-in to Cadence tools for some years and more recently a direct import has become available that offers component recognition, layer editing, and net and area selection. A major collaborative project this year enables Cadence layout engineers to stay within their familiar layout environments and perform a full wave 3D extraction and simulation in the background.
verification. Analog IP is typically both designed and verified by skilled analog designers, predominantly using the analog-schematic environment. The task of functional verification is based mostly on visual inspection, with very little automation for checking functionality and results. Most systems have to interface their millions of digital logic gates, DSPs, memories, and processors to the real world through analog components like a display, an antenna, a sensor, a cable, or an RF interface. Verifying the correct behavior of large mixed-signal SoCs using analog models in Spice or Verilog-AMS is a major bottleneck when the digital logic is modeled as RTL. Traditionally, digital verification engineers have made assumptions about the analog components, and analog designers likewise have made assumptions about the digital behavior. This is a rich source of errors. There is a need to apply advanced verification methodologies and techniques from the digital verification realm to analog components, while balancing the speed required to verify digital components against the accuracy needed to model analog components. This paper is based on pioneering work being done through a partnership between LSI Shanghai and Cadence. A prototype was developed to demonstrate the benefits of applying an OVM-based verification flow to the verification of a complex analog block that is part of a live project. The advantages of this flow and positive results will be highlighted. The presenter will share his experiences in: 1. Applying the OVM to the verification of complex analog IP blocks and 2. Applying digital-centric mixed-signal verification (DMSV)-based techniques to model analog IP for use in an OVM-based verification environment.
used. Verification complexity can be significantly reduced by using byte, shortint, int, and longint for members of a Verilog interface. By dropping bit-accurate representations of interface members, there is no longer a need to parameterize an interface. The following advantages will be covered in this session: 1. handling of configurable IP without the need for parameterized interfaces; 2. verification of connections with different data widths is achieved automatically; 3. less code is required to work with abstract objects; 4. solutions to verification problems are easier to see; 5. fewer bugs are written with abstract data types; and 6. the design of OVM drivers is significantly simplified. A highly configurable network-on-chip project within Sonics was used as a test bed to implement new verification methodologies with OVM. Significant challenges were encountered with OVM, as it has shortcomings in addressing configurable IP. As a result of attending this session, designers will have a new understanding of how to approach testing with OVM. The session will make it possible to save 5-35% of the verification schedule.
design description at a high level in SystemC, and synthesize RTL from such a specification. It is well known that verification represents a major part of overall design effort and cost. It is thus equally important to develop new verification technologies and methodologies to complement the design side. Widely used logic equivalence checking technology is not sufficient, because HLS includes transformations like pipelining that do not preserve the design's cycle-by-cycle behavior. Calypto's SLEC tool introduces sequential logic equivalence checking technology that can verify equivalence between the SystemC input to CtoS and the RTL Verilog that CtoS generates, even when transformations like pipelining are used. We will describe the integration of CtoS and SLEC, in which CtoS provides additional design information to SLEC so that it can efficiently accomplish this task. This information includes both the precise relation of I/O signals needed to formulate the equivalence condition, and information about internal equivalence points that SLEC can use to dramatically speed up verification. We will use actual customer designs to illustrate how the integration enables efficient, automatic verification of realistic designs.
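Why cycle-by-cycle comparison fails after pipelining, while sequential equivalence still holds, can be shown with a toy model. The Python fragment below is not SLEC and not SystemC; the function, the 2-stage pipeline, and the latency offset are invented purely to illustrate the equivalence condition relating the two I/O streams.

```python
# Toy illustration of why sequential, not combinational, equivalence is needed
# after high-level synthesis: a pipelined implementation computes the same
# values, but each result appears N cycles later.

def untimed(x):
    return (x * 3 + 1) & 0xFF  # the behavioral spec, untimed

def pipelined_step(state, x):
    """2-stage pipeline: stage 1 multiplies, stage 2 adds; returns state, output."""
    s1, s2 = state
    out = s2                                    # result leaving stage 2 this cycle
    new_state = ((x * 3) & 0xFF, (s1 + 1) & 0xFF)
    return new_state, out

def run_pipeline(inputs, latency=2):
    state, outs = (0, 0), []
    for x in inputs + [0] * latency:            # flush with dummy inputs
        state, out = pipelined_step(state, x)
        outs.append(out)
    return outs[latency:]                       # drop outputs from pipeline fill

stimulus = [5, 10, 200, 7]
assert run_pipeline(stimulus) == [untimed(x) for x in stimulus]
# A cycle-by-cycle check would fail: at cycle 0 the pipeline emits 0, not
# untimed(5) -- yet the two designs are sequentially equivalent.
```

The "precise relation of I/O signals" mentioned above plays the role of the `latency` offset here: it tells the checker how to align the two streams before comparing them.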
Strategic Software-Dictated Hardware Design
Gary Stringham, Gary Stringham Associates, LLC
This paper will present a case study of an ASIC block with an unusually high number of defects. Some defects were very difficult to diagnose and others required complex workarounds in the device driver to avoid respins.
Writing the device driver took more than 12 months. Research showed that many common-sense practices in the hardware/software interface had not been followed. Further research collected many hardware/software interface best practices that were documented and deployed for all ASIC blocks. These hardware design best practices are focused on how software interfaces with hardware. In other words, aligned with the EDA360 vision, software dictates the hardware design. The best practices also promote collaboration, reuse, and first-time-right silicon. They are methodologies that can be employed regardless of the EDA tool set used. Some best practices can be automated, supported, and/or enforced with existing and/or enhanced EDA tools. Hardware design errors can lead to chip respins costing $1,000,000 and 3 months. Following best practices will eliminate or mitigate these hardware design errors, reduce system integration efforts, save $12,500 for each engineering man-month spared, and get quality products out the door sooner. This paper will discuss some of the best practices and how, in the case study, the device driver development time was significantly reduced with the next version of the block.
virtual functional verification strategy. The verification task for the project relating to this paper was the integration of digital and mixed-signal design components as well as firmware components. The project was to deliver a system that integrated digital ICs, mostly analog ICs, and electrical components onto a hybrid/PCB. These components were required to perform individual functions that, when integrated, provided a usable medical device feature. The verification problem, as a whole, was addressed by 1. Breaking the problem down to appropriate levels; 2. Establishing verification plans that identified requirements-based goals at each level; and 3. Executing verification tasks that provided plan-measurable results at each level. For functional verification, IC-level verification environments were established around ICs and components. These verification environments utilized constrained-random and AMS technologies. The goal of these individual environments was to verify and build confidence in the individual functions they were required to perform. At the integration level, i.e. multi-IC and hybrid/device, the ICs and electrical components were brought together in a homogeneous simulation environment.
idea to see what can be learned from others who have recently been down the same path. In that vein, this paper explores real-world user experiences with the OVM on a new project. In addition to highlighting several aspects of the OVM, specific recommendations and pitfalls will be pointed out, including relevant code snippets. Some of the topics discussed include error reporting, configuration objects, sequence layering, and techniques to achieve a 'compile once, run many' strategy. It is the author's hope that this paper will be useful in helping others painlessly incorporate the OVM into their verification flow.
performance or size. Medical devices follow the same trend. Medical ICs are also almost universally mixed-signal in nature, since they must interface with the analog world of the human body. Rising fabrication costs and project schedule dependencies require that the chip be correct on first silicon, putting much more pressure on the verification team. The focus of this paper is on the use of the Specman-AMS flow to create a reusable closed-loop verification environment suitable for verifying the functional behavior of a mixed-signal IC and also reusable in system-level verification. Cadence tools (Incisive Enterprise Simulator, Incisive Formal Verifier, Specman, Spectre, AMS Designer, etc.) are used extensively in the verification effort, which is performed prior to fabrication. In this paper, we will highlight how Medtronic used Cadence verification tools to improve the overall productivity and quality of the IC. Importantly, we will highlight how a coverage-driven verification approach as well as a closed-loop constrained-random verification flow are used for mixed-signal chip verification.
prescribed methodology, enables the construction of a reusable verification environment as well as reusable verification IP. An OVM-based testbench typically contains components required for constrained-random stimulus generation and coverage monitoring. While traditionally testbenches are considered to be the most important component of any verification environment, considering the complexity of the verification process of today’s SoC designs, it is critical to be able to compare verification goals to verification results at a higher level of abstraction than just test completion or functional coverage hits. Measuring and tracking verification closure of features and the ability to track verification progress by using a verification plan are key aspects of successful OVM usage. This capability is provided by the Cadence metric-driven verification (MDV) methodology. The MDV approach, combined with the OVM, addresses the verification challenges of the most complex SoC designs. This paper will illustrate that reuse in OVM (or UVM) is only as good as the reusability of the verification environment.
implements the design. Assertions are the cornerstone of formal analysis, providing the targets for proofs and bug discovery. Assertions add visibility in simulation and ease debug by pointing to the source of failing tests. Assertions even run in hardware platforms for simulation acceleration. With all these advantages, it might be expected that every design and verification engineer would use assertions extensively. Although there has been a big upsurge in adoption recently, more than half of the engineers designing and verifying chips make little or no use of assertions or ABV. The most common reason cited is that they find it difficult to specify the assertions. Some engineers have trouble making the conceptual leap to assertions that are orthogonal to the implementation; others find the specification languages hard to use. Various forms of automated assertion generation have been tried over the years in an attempt to overcome these objections. This session presents a new approach to generation that yields assertions of far greater complexity and value than earlier methods, thereby encouraging the use of ABV and the realization of its benefits. The generated assertions run in simulation, formal analysis, and hardware, so they can be used throughout the entire chip development process. Specific results from actual customer projects will be provided to validate this approach.
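To make the idea of automated assertion generation concrete, here is a deliberately minimal Python sketch of one classic technique, mining candidate invariants from simulation traces. This is a hypothetical illustration, not the approach of the tool described above, and the handshake trace is invented; real generators produce far richer temporal assertions.

```python
# Minimal sketch of trace-based assertion generation: propose every same-cycle
# implication "a == 1 |-> b == 1" between signal pairs, then keep only the
# candidates that no recorded cycle contradicts.

def mine_implications(trace):
    """trace: list of dicts mapping signal name -> 0/1, one dict per cycle."""
    signals = sorted(trace[0])
    candidates = {(a, b) for a in signals for b in signals if a != b}
    for cycle in trace:
        candidates = {(a, b) for (a, b) in candidates
                      if not (cycle[a] == 1 and cycle[b] == 0)}
    return sorted(candidates)

# A handshake-like trace: grant is only ever asserted while req is high.
trace = [
    {"req": 0, "gnt": 0},
    {"req": 1, "gnt": 0},
    {"req": 1, "gnt": 1},
    {"req": 0, "gnt": 0},
]
print(mine_implications(trace))  # ('gnt', 'req') survives: gnt |-> req
```

The surviving candidates are only hypotheses; in practice they must be confirmed by formal analysis or further simulation before being trusted as assertions.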
To enable model portability across variants of the Verilog language, a set of define macros is presented. A Verilog-A testbench verifies both the model and the transistor-level design to ensure correspondence. This paper will show how using the methodology increases the choice of mixed-signal circuits and EDA tools for SoC designers, while expanding the markets of circuit and EDA tool providers and improving semiconductor industry efficiency.
in the kit have a complete and correct set of the data required by the tools in the flow. Cadence is developing an object-oriented device infrastructure to support your efforts in developing robust design kits. It is important to focus on devices, which are the complete data representation of the circuit, instead of just Pcells. There are complex data relationships among the elements of a device such as the CDF, Pcells, schematic, symbol, callbacks, and simulation data. Data changes in a single element often require multiple edits on the other elements to keep everything in sync. A single source for the device description compiles into all of the elements of the device. This methodology eases initial entry, minimizes the amount of rework, and ensures that your designers have devices that properly represent the process and support the tools they need. The object-oriented device infrastructure (OODI) is built in SKILL and SKILL++. It defines an extensible methodology for the rapid creation and assembly of devices, and the integration of those devices, into the PDK.
Analog PDK Automation and EDA teams have been jointly developing the "Component vs. Flow" capability by leveraging the OpenAccess Incremental Technology Database (ITDB). Component vs. Flow provides a data-driven project setup capability that allows a design team to build a project-specific PDK that ensures feature compatibility and excludes unnecessary components and masks based on the initial project setup choices. PDK ITDB libraries are auto-generated from a single source to ensure consistent results and to improve quality and productivity. This presentation will provide an overview of the new capability including the automation techniques used for building the technology files and design kits.
Having design tools as well as a methodology/process in place provides a path for reducing both cost and the associated risk. The cost savings achieved are in terms of schedule, since fabrication takes less time. Implementing the ECO in fewer metal layers also yields further cost savings in terms of mask-layer costs. This is relevant to all silicon corporations, big and small. Encounter Conformal ECO Designer and Encounter RTL Compiler - Physical provided all the capability to do this.
The paper goes through a workflow that displays the results we were able to achieve. Much of the work involves the designer understanding the physical limitations on spare cells and striking a balance to achieve the necessary results. In addition, the use of Cadence design services provided a seamless and sturdy interface to the verification and tapeout processes. In this presentation, we provide a case study of how we implemented changes in a PCI Express controller to enhance performance. The session will give attendees a better understanding of implementing ECOs in a Cadence environment. The end result is a cost savings and a schedule savings of 6 weeks in an overall project schedule of 3 months.
faced in top-level simulation of a complex mixed-signal design using Virtuoso AMS Designer. The mixed-signal design had DC/DC buck converters and LDOs along with other design units in one analog process, in a closed loop with a microcontroller and microprocessor design in another process node. The complexity was roughly 350K gates with RAM, FLASH, and ROM. Setting up such a large design, in terms of correct partitions, was the first and foremost challenge. Sub-configurations were used tactically at various design boundaries to ease the partitioning process.
Secondly, supply-sensitive simulation was a must, which involved compiling functional blocks with supply-sensitive constructs and using scope disciplines at correct design partitions to reduce the elaboration time. To obtain the correct delays, backannotation of digital timing SDF to match the correct design hierarchies was a challenge due to dynamic digital design changes and the alignment scripts. Multiple iterations were performed to optimize the view bindings to reduce the netlisting, compilation, and elaboration time of AMS Designer, which later paid off in design debugs and performance of various checks before tapeout. This presentation shows how we not only managed to simulate and analyze the entire top level by fixing critical design bugs before tapeout, but how we also realized first-silicon success with perfect silicon-simulation correlation.
programmable cells (Pcells), multiple simulator selections through Virtuoso Analog Design Environment (ADE), and flexible technology access and definition. The language offers numerous customizations, including the procedural syntax extension, which is based on built-in macros. However, some popular forms of syntax (widely accepted in younger languages like Perl or Python) are unavailable. In particular, the lack of operator overloading confines the developer to the limited space of basic SKILL syntax.
This paper presents a technique that allows for the implementation of practically any new SKILL language extension with a single, specialized syntax module. Thanks to the proposed solution, constructs such as multiple assignment, (a b c) = (d e f), and compound assignment, *= and +=, become possible. In addition, operator overloading and new operator injection introduce long-awaited paradigms in SKILL code development. The infrastructure of the enhanced interpreter, as well as examples of some implemented operators, is presented.
At 28nm and below, manufacturing challenges are such that minimum DRC rules fail to capture too many potential yield issues, whereas global application of relaxed DRC rules causes an unacceptable increase in design area. GLOBALFOUNDRIES recently announced an innovative DFM approach called DRC+ that is more than 100x faster than traditional litho simulation. DRC+ leverages fast 2D pattern matching to search designs for potential yield detractors and mark them for fixing with relaxed DRC rules.
To enable DRC+ for designers, GLOBALFOUNDRIES has made available the industry's first 28nm pattern library of potential yield detractors. Cadence has been an early partner in GLOBALFOUNDRIES' development of the DRC+ flow, which leverages Cadence pattern classification technology. This paper describes what, when, why, and how designers incorporate DRC+ into an existing digital implementation flow as part of DFM signoff at 28nm and below. With Cadence pattern matching and automated fixing built into Encounter technology, designers can quickly and efficiently identify and fix DRC+ errors, avoiding potential manufacturability issues down the road.
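The core 2D pattern-matching step can be sketched as follows. This is a toy illustration on a rasterized 0/1 grid, not the GLOBALFOUNDRIES pattern library or Cadence's actual classification engine:

```python
def find_pattern(layout, pattern):
    """Return (row, col) offsets where `pattern` occurs in `layout`.
    Both are lists of equal-length rows of 0/1 cells, a crude raster
    stand-in for real polygon geometry."""
    lr, lc = len(layout), len(layout[0])
    pr, pc = len(pattern), len(pattern[0])
    hits = []
    for r in range(lr - pr + 1):
        for c in range(lc - pc + 1):
            # exhaustive window comparison; production matchers use
            # far faster indexed/hashed search
            if all(layout[r + i][c + j] == pattern[i][j]
                   for i in range(pr) for j in range(pc)):
                hits.append((r, c))
    return hits

# Illustrative layout snippet and a (made-up) yield-detractor pattern
layout = [[0, 1, 0, 1],
          [1, 1, 1, 1],
          [0, 1, 0, 1]]
detractor = [[0, 1],
             [1, 1]]
print(find_pattern(layout, detractor))  # locations to mark for relaxed rules
```

Each hit would then be flagged for fixing with the relaxed rule set, rather than triggering a hard DRC violation.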
of GUI customization done by their customers, Cadence focused on providing full-featured, albeit protected, APIs for extending its base EDI GUI. This joint Avago/Cadence session will explain the differences between the older, Tcl/Tk-based GUI and the current Qt-based GUI.
The session will also provide examples and source code comparing and contrasting how certain tasks are accomplished in the two GUIs, and will preview the GUI enhancements expected in upcoming releases of EDI System. Example menus and source code will show what Avago Technologies has done to extend EDI System for activities such as custom-routing porosity solutions, special-net wire creation, and other tasks. Attendees responsible for deploying custom solutions to other EDI System users will come away with easy-to-implement GUI examples.
timing analysis (SSTA) can reduce this pessimism by analyzing the aggregate probability of delay over a path. However, the concepts of statistical timing analysis are quite different from traditional deterministic STA methods, which makes practical application of the technology more difficult. The design-specific OCV (DS-OCV) technology available from Cadence addresses this difficulty by simplifying the design flow while still leveraging the pessimism-reducing benefits of SSTA. This presentation introduces the DS-OCV concept and details the experiences of Renesas Electronics in applying it in a production design flow.
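The pessimism reduction can be seen with a small numerical sketch (illustrative numbers only, not Renesas data): flat OCV derating adds worst-case margins linearly along a path, while statistically independent stage variations combine root-sum-square:

```python
import math

stage_delays = [100.0] * 10   # nominal delay per stage, ps (illustrative)
stage_sigma = [5.0] * 10      # 1-sigma variation per stage, ps (illustrative)

nominal = sum(stage_delays)

# Flat OCV: assume every stage sits at its +3-sigma corner simultaneously
flat_ocv_path = nominal + 3 * sum(stage_sigma)

# Statistical view: independent variations add in quadrature over the path
path_sigma = math.sqrt(sum(s * s for s in stage_sigma))
ssta_path = nominal + 3 * path_sigma

print(flat_ocv_path)         # 1150.0 ps
print(round(ssta_path, 1))   # 1047.4 ps, roughly 100 ps of pessimism removed
```

The longer the path, the larger the gap between the two numbers, which is why per-path statistical treatment pays off on deep logic cones.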
estimations. In this paper, we describe a methodology and design process for measuring power early, directly at the RTL level, during the design and verification (D&V) stage. By combining an integrated simulation-based approach with block-level synthesis, power can be measured and profiled at the block and system levels. This further allows D&V teams to optimize a design and conduct architectural exploration at the system level.
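A minimal sketch of the idea, with hypothetical block names and numbers, combines per-block switching activity from simulation with synthesis-derived capacitance estimates via the standard dynamic-power relation P = alpha * C * V^2 * f:

```python
VDD = 1.0      # supply voltage, V (assumed)
FREQ = 500e6   # clock frequency, Hz (assumed)

# block -> (average toggle activity per cycle, estimated switched cap in F)
# These values are illustrative; real flows read them from simulation
# activity files and synthesis reports.
blocks = {
    "fetch":  (0.20, 2.0e-12),
    "decode": (0.15, 1.5e-12),
    "alu":    (0.30, 3.0e-12),
}

def dynamic_power(activity, cap, vdd=VDD, freq=FREQ):
    """Dynamic power of one block: alpha * C * V^2 * f."""
    return activity * cap * vdd ** 2 * freq

profile = {name: dynamic_power(a, c) for name, (a, c) in blocks.items()}
total = sum(profile.values())
for name, p in profile.items():
    print(f"{name}: {p * 1e3:.3f} mW")
print(f"total: {total * 1e3:.3f} mW")
```

Profiling this per simulation window, rather than once, is what exposes power-hungry scenarios early enough to drive architectural changes.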
show ever-increasing densities as prices continue to drop. This presentation discusses a case study in which the Cadence front-to-back Allegro PCB solution was used for schematics, layout, constraint management, and high-speed analysis. Additional third-party tools were integrated into the process to extend the analysis capabilities and achieve first-turn success. The PCB in this case study is the largest Allegro database on record, with an impressive count of 72 layers, multiple 5,000-pin parts, miles of interconnect trace, and power requirements in the thousands of amps—all on a single board.
power chips. Analog products are already supported by ST's SMPS@eDesign Studio free online tool, which was conceived specifically to help in the design and simulation of switch-mode power supply (SMPS) systems. This tool simplifies power supply design and offers designers the flexibility to choose the right products and topologies. A full design is generated from the high-level requirements specified by the user, and provides: all relevant parameters and results; a full and interactive schematic; a full and interactive bill of materials (BOM); and a full set of analysis diagrams.
To improve accuracy without increasing computational complexity, a new IC evaluation platform using macromodels was developed, integrating the online tool with Cadence OrCAD PSpice technology. This approach enables the robust and widely used OrCAD platform to simulate ST's analog and power product families. From now on, designs obtained via SMPS@eDesign Studio can be simulated more accurately within the OrCAD platform using PSpice. OrCAD PSpice is a full-featured, native analog/mixed-signal circuit simulator widely considered the de facto industry-standard SPICE-based simulator for system design.
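As a hedged illustration of the kind of high-level-to-component derivation such a tool automates, the sketch below sizes a buck converter from its requirements using standard textbook ripple formulas, not the SMPS@eDesign Studio algorithm:

```python
def buck_design(vin, vout, iout, fsw, ripple_i=0.3, ripple_v=0.01):
    """Size the inductor and output capacitor of a buck converter.
    ripple_i: inductor current ripple as a fraction of Iout.
    ripple_v: output voltage ripple as a fraction of Vout.
    Textbook first-pass formulas; a production tool adds losses,
    tolerances, and real component selection."""
    duty = vout / vin
    di = ripple_i * iout
    dv = ripple_v * vout
    L = vout * (vin - vout) / (vin * fsw * di)  # inductor ripple equation
    C = di / (8 * fsw * dv)                     # cap from ripple current
    return {"duty": duty, "L": L, "C": C}

# Hypothetical spec: 12 V in, 3.3 V out, 2 A load, 500 kHz switching
d = buck_design(vin=12.0, vout=3.3, iout=2.0, fsw=500e3)
print(f"D = {d['duty']:.3f}, L = {d['L'] * 1e6:.1f} uH, "
      f"C = {d['C'] * 1e6:.1f} uF")
```

The generated component values would then feed the schematic, BOM, and simulation deck the abstract describes.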
intrinsic semiconductor noise (thermal, shot, burst, etc.), or by enough digital switching noise to disturb sensitive analog circuits on a mixed-signal design. This paper presents a case study of a customer using the Cadence QRC substrate noise analysis flow. Verification of the results from SNA (substrate noise analysis) is a necessary step that requires qualification of process profiles, Assura runset implementation, and post-layout simulation. Does the flow just 'magically' work? How do you make sure the various techfiles interact with each other correctly? What are the tool limitations? This paper answers these questions and describes how to successfully integrate these tools into a repeatable, usable process.
complexity only increases at 28nm and beyond. IC design is no longer a monolith; instead, it is a mosaic of processing cores, on-chip memories, and IP blocks from various third-party sources, complicating the chip finishing process at advanced nodes. There are many variants of the chip assembly process: one uses the physical design and editing environment; another assembles chips during place and route.
This paper, however, analyzes a third variant of chip finishing that is gaining popularity: assembling chips in GDSII or OASIS format, where design database sizes range from tens to hundreds of GB and the layout data requires minor edits prior to its handoff to the mask shop. This phase of the design flow requires fast turnaround time, rapid viewing of GDSII or OASIS layouts, nimble analysis, flexible layout manipulation, and seamless integration with physical verification tools. We will present SoC case studies of chip finishing flows that meet time-to-market demands for advanced-node designs, and the benefits of using Cadence QuickView and its deep integration with the Cadence Physical Verification System (PVS).
and leverage all the benefits that the Cadence environment offers. TowerJazz will present such a library structure, defined together with the Cadence methodology team. It is the first time such a methodology has been clearly defined to take advantage of all OA benefits, such as the hierarchical structure of the digital definition over the analog one and a clear via definition at each level. We will also address the place of TECHLEF and LEF in the new methodology: are they still needed? In this presentation, we will go over the defined structure and give examples of three library levels and the content of each.
jitter, absolute period jitter, and phase noise can be easily determined and plotted after simulation. Since transient noise analysis can lengthen simulation time, the Spectre APS High-Performance Option can be used to reduce it. Results are shown from a PLL design in the TSMC 28nm process.
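Period and absolute jitter extraction from simulated edge times can be sketched as follows, using synthetic rising-edge timestamps rather than actual Spectre transient-noise output:

```python
import statistics

ideal_period = 1.0e-9  # 1 GHz clock (illustrative)
# Rising-edge times in seconds, with small made-up noise deviations
edges = [0.0, 1.02e-9, 1.99e-9, 3.01e-9, 3.98e-9, 5.00e-9]

# Period jitter: spread of cycle-to-cycle periods around their mean
periods = [b - a for a, b in zip(edges, edges[1:])]
period_jitter_rms = statistics.pstdev(periods)

# Absolute jitter: deviation of each edge from its ideal position n*T
abs_jitter = [t - n * ideal_period for n, t in enumerate(edges)]
worst_abs = max(abs(j) for j in abs_jitter)

print(f"RMS period jitter: {period_jitter_rms * 1e12:.2f} ps")
print(f"worst absolute jitter: {worst_abs * 1e12:.2f} ps")
```

With real simulator output, the edge list comes from interpolated threshold crossings of the transient waveform; the statistics step is identical.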
requires an efficient method to quickly deliver the improvements to the customer. The Cadence PAS tool lets you generate Pcells from the GTE file format, a method that is both easy to use and efficient. Moreover, it handles complex problems such as via filling of polygons, which can save weeks of development time. The option of CSV-format translation further increases the safety of PAS. Most of our benchmarking was done at the 65nm node, but also at 0.35µm, in both OA and cdba.
A production delivery has been made at 20nm, including some Pcells generated with PAS. As an example, via filling of complex polygons has shown the efficiency of PAS: via filling of a 45° rectangle took 4 GTE frames instead of 300 lines of SKILL code, and via filling of an octagon took 6 GTE frames instead of 500. Undeniably, the use of GTE increases productivity for Pcell development. GTE also increases the quality of the SKILL code generated by PAS, because complex functions such as via filling are supported by Cadence technology instead of requiring Pcell developers to write and maintain them indefinitely.
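The flavor of the via-filling problem can be sketched in a few lines. The design rules below are hypothetical; real Pcell code must honor the full foundry rule deck, which is exactly why delegating it to the tool pays off:

```python
def fill_vias(width, height, via=0.1, pitch=0.2, enclosure=0.05):
    """Return lower-left (x, y) coordinates of vias placed on a grid
    inside a width x height rectangle (dimensions in microns),
    honoring a via size, via-to-via pitch, and enclosure margin.
    All rule values here are made up for illustration."""
    vias = []
    x = enclosure
    while x + via + enclosure <= width + 1e-9:   # tolerance for float drift
        y = enclosure
        while y + via + enclosure <= height + 1e-9:
            vias.append((round(x, 3), round(y, 3)))
            y += pitch
        x += pitch
    return vias

print(len(fill_vias(1.0, 0.6)))  # 15 vias fit under these hypothetical rules
```

Extending this to 45° and octagonal outlines is where hand-written SKILL balloons into hundreds of lines, and where the GTE frames quoted above replace that effort.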
achieve design rule closure while meeting design constraints such as density, timing, power, and signal integrity goals. In addition to the required rules, advanced technology manuals call out a set of recommended design rules that, if satisfied by the layout, improve manufacturability and yield. Recommended rules impose a new optimization target for modern routers because of their soft and opportunistic nature: the router needs to approximate or satisfy the recommended rule by utilizing layout opportunities without compromising either required rules or design constraints.
This paper demonstrates a Cadence NanoRoute-based methodology to implement recommended rules on top of the required design rule closure for the GLOBALFOUNDRIES 28SLP process. It discusses in detail the additions to the rule modeling within a technology file for NanoRoute as well as the necessary NanoRoute flow enhancements. The paper shows the application of this methodology to a set of test cases and compares resulting design metrics as well as runtime impacts for different variants of recommended rule implementation. This information, plus ready templates, will help attendees make an educated choice on the recommended rules methodology.
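The soft, opportunistic nature of recommended rules can be illustrated with a toy one-dimensional spacing model (hypothetical numbers, not actual 28SLP rules): the hard required spacing must always hold, and any leftover room is spent approaching the recommended spacing:

```python
def achievable_spacing(n_wires, region_width, wire_width=0.05,
                       required=0.10, recommended=0.14):
    """Evenly distribute n_wires across region_width (microns):
    enforce the hard required spacing, then use any remaining room
    to approach (or fully meet) the soft recommended spacing.
    All dimensions are illustrative."""
    free = region_width - n_wires * wire_width
    spacing = free / (n_wires - 1)
    if spacing < required:
        raise ValueError("cannot meet required spacing")
    return min(spacing, recommended)

print(achievable_spacing(5, 0.85))            # roomy: recommended spacing met
print(round(achievable_spacing(5, 0.70), 4))  # tight: best effort above required
```

A real router makes this trade-off per wire segment against timing, SI, and density constraints simultaneously, but the asymmetry is the same: required rules are constraints, recommended rules are objectives.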
of fullchip integration and processing is adopted in-house in order to detect any signoff shortfall. Furthermore, the resulting integrated fullchip netlist can be used by implementation tools such as the Cadence Encounter Digital Implementation System to perform the necessary power planning; module sizing and floorplanning; module pin assignment; placement and routing congestion analysis; partitioning for module place-and-route with timing predictability; and top-level congestion and timing analysis.
Floorplanning in-house before ASIC flow handoff enables a quality check-off and a feasibility assessment that shorten the time and effort spent at the vendor's site. The process flow will be presented in detail, including top-level floorplanning, power planning, module pin assignment, module partitioning, placement and routing congestion analysis, and top-level timing analysis. A specific example from a 45nm technology will be used.
getting them up and running as smoothly as possible. In addition, adopting a new routing tool seemed daunting to analog users and required many infrastructure changes in the PDKs, impacting schedules. PDK infrastructure that was once considered "state of the art" needed maturing to enable our users to leverage the new capabilities.
All of this takes development time, tool testing, support training, user training, and deployment. This paper explains the Analog Layout Automation Roadmap at TI, detailing the reasons behind this plan as well as the stumbling blocks we encountered in implementing it.
Windows - Internet Explorer 6 or above, Firefox 2+, or Chrome 1+, with Silverlight 2+ or with the Port 25 Windows Media Player Firefox Plugin and Windows Media Player 6.4+. Mac OS X - Safari+, Firefox 2+, or Chrome, with Silverlight 2+.
If you need assistance accessing CDNLive! On-Demand content, contact Cadence.