DAC 2012 Panel – Can One System Model Serve Everybody?

Filed under: Industry Insights, ESL, SystemC, High-level Synthesis, HLS, TLM, embedded software, Bailey, models, modeling, software development, TLM 2.0, Schirrmeister, DAC 2012, Goodenough, AT, high-level models, interconnect analysis, Higgins, modeling panel, LT, Kroll, Swan, system models, hardware architecture, Risacher

Can one system model ever serve the needs of system architects, hardware developers, software developers, and verification teams? Probably not, according to panelists at the Design Automation Conference (DAC 2012) on June 5. But the panelists offered informative perspectives on the various types of models that are needed, and some suggestions for simplifying what has become a rather chaotic situation.

The panel was titled, "System Models - Does One Size Fit All?" It was organized by Frank Schirrmeister of Cadence and moderated by consultant Brian Bailey, editor of the EE Times EDA DesignLine. The panelists were:

  • Frederic Risacher, senior handheld modeling specialist, Research in Motion (RIM)
  • Richard Higgins, member, Qualcomm
  • John Goodenough, vice president of design technology and automation, ARM
  • Andrea Kroll, senior system methodology specialist, Tensilica
  • Stuart Swan, senior architect, Cadence

Bailey kicked off the panel by noting how complicated modeling has become. At one time there was a single model, a design model. Then the verification team wanted a predictor model, and soon thereafter, a coverage model. Software developers wanted models for early software development, architects had a variety of models, and designers added models for high-level synthesis, prototyping, analog behavior, and other tasks. All these models have complex interconnections and dependencies.

"There is no company out there that has the means to create and maintain all these models," Bailey said. "So choices have to be made, and those choices can significantly impact the capabilities that exist in the flow."

Frederic Risacher - Modeling Hardware We Don't Make

Risacher noted that RIM does not design its own hardware, but rather seeks differentiation through software, which it does develop. Still, the company needs to ensure that hardware meets its specifications. The company has three high-level modeling requirements:

  • Hardware architecture analysis identifies mismatches between silicon and use cases. It requires a mix of performance-accurate models and traffic generators, available nine months before silicon.
  • Software architecture analysis investigates the impact of software variants on performance. It requires modeling software at a high level of abstraction, just as the hardware is modeled.
  • Early software development requires a fast functional simulator to boot OSes. Benchmarking needs to be functionally accurate and cycle approximate.

The downside? "To cover these three domains at RIM, we have three different SystemC environments, unfortunately. And we use four types of models," Risacher said. Most silicon vendors, he noted, provide one type of model - probably not two, certainly not four.

Richard Higgins - Single Model Across Multiple Abstraction Levels

Whether or not it's practical, Higgins seems to be hoping for a "one size fits all" type of approach. Today, he noted, Qualcomm is using system models pre-silicon as a way to predict performance and "converge" on uncertainty. The challenge is that a complete use case evaluation requires the reuse of validated models with different specifications and technologies.

"ideally we would like a validated, single view of a model containing multiple abstraction levels," Higgins said, "and platforms that are capable of exercising all the abstractions." One approach towards this goal is to create multiple abstractions through what Higgins called "requirements derivation," which involves developing models according to tiered requirements. This yields a family of validated IP models of varying abstractions, and a "snap together" view of a platform that executes multiple abstraction levels.

The other approach is to develop multiple abstractions through refinement. Higgins said this approach raises the abstraction level, unifies annotated and scheduled timing, and makes architectural trade studies possible. It may be enabled by synthesizing SystemC from a SysML executable specification.

John Goodenough - Let's Get Practical

ARM sees a lot of modeling requests from users, but they have to make "pragmatic business sense" if the requests are to be answered in a reasonable time and cost, Goodenough said. ARM has been able to meet the needs of many customers with its Fast Models, which are instruction-set models based on the transaction level modeling (TLM) 2.0 standard. But the challenge is that customer use models vary greatly.

"One thing we can do to make modeling successful is to not have to provide four different models," Goodenough said. "We need to consolidate those into a workable subset of maybe one or two, so we can produce them in a timely manner for the use model, and avoid what happens in most modeling environments that have a lot of hackware and integration scripts to integrate all of these legacy models."

He added that "the panacea of trying to get one size that fits all is not going to happen because of different use models and legacy. But there are some very practical and pragmatic things we can do to help real usage." One is to push for a standard communication, control, and debug interface in models.
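What might such an interface look like? Here is a purely illustrative C++ sketch; the names are invented for this post and do not come from any published standard. The point is simply that every model, at any abstraction level, would expose the same control and debug hooks, so tools talk to models one way instead of through per-model integration scripts.

```cpp
// Purely illustrative: these names are invented for this sketch and do
// not come from any published standard. The idea is that every model,
// at any abstraction level, exposes one common interface for control
// and debug, so tools need only one way to talk to it.
#include <cstdint>
#include <cstddef>

class ModelControlIf {
public:
    virtual ~ModelControlIf() {}

    // Control: put the model in a known state, then advance simulated time.
    virtual void reset() = 0;
    virtual void run(std::uint64_t quantum_ns) = 0;

    // Debug: side-effect-free memory access, as a debugger would need.
    virtual std::size_t debug_read(std::uint64_t addr, std::uint8_t* data,
                                   std::size_t len) = 0;
    virtual std::size_t debug_write(std::uint64_t addr, const std::uint8_t* data,
                                    std::size_t len) = 0;
};
```

With something like this in place, the "hackware" Goodenough describes largely disappears: a legacy model only needs a thin adapter to the common interface, not a bespoke script per tool.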

Andrea Kroll - Three Model Use Cases

Kroll identified three different use cases for system models:

  • Early embedded software development - develop software before hardware is ready, using instruction set simulator (ISS)
  • Interconnect analysis - software is often unavailable, so traffic generators are used (see the sketch below)
  • Multi-core system performance validation - software and hardware architecture are available, detailed hardware is not

"In my opinion, one size fits all is far too complicated for the user," Kroll said. "You really need to look at what your situation is and provide the right level of abstraction. Find the abstraction you need, instead of demanding cycle accuracy and a very fast model!"

Stuart Swan - What's the ROI?

Does one system-level model serve all purposes? "Pretty obviously no," said Swan. "There are always tradeoffs between performance and accuracy, and then there's model availability." Does that mean the more types of models you have, the better? "No!" was his answer.

Swan's view is that we need to look at modeling from a business perspective and consider where to get the greatest return on investment (ROI). After all, he noted, modeling takes a lot of time, money and people. If it doesn't solve a painful problem, it's probably not worth a major modeling effort. The same may be true if a model isn't reusable.

Some models have clearly shown ROI, Swan said. These include RTL models, TLM 2.0 LT (loosely timed) models, pure C/C++ functional models, and increasingly, high-level synthesis models. Some have poor ROI, including TLM 2.0 AT (approximately timed) models. "Some companies have tried and made significant investments, but have given up because it's too difficult and expensive," Swan said.
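Part of why LT models earn their keep is visible in how little code one takes. The following is a minimal, illustrative TLM 2.0 LT memory target, invented for this post: the entire transaction completes in a single blocking b_transport call, with timing collapsed into one annotated delay, which is exactly the simplicity that AT models, with their multi-phase handshakes, give up.

```cpp
// Minimal illustrative LT memory target (invented for this sketch): the
// whole transaction is serviced in one blocking b_transport call, and
// timing collapses to a single annotated delay. That simplicity is a
// big part of why LT models are cheap to write and fast to simulate.
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>
#include <cstring>

struct SimpleMemory : sc_core::sc_module {
    tlm_utils::simple_target_socket<SimpleMemory> socket;
    unsigned char mem[1024];

    SC_CTOR(SimpleMemory) : socket("socket") {
        socket.register_b_transport(this, &SimpleMemory::b_transport);
        std::memset(mem, 0, sizeof(mem));
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        sc_dt::uint64 addr = trans.get_address();
        unsigned int len  = trans.get_data_length();
        if (addr + len > sizeof(mem)) {
            trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
            return;
        }
        if (trans.is_read())
            std::memcpy(trans.get_data_ptr(), &mem[addr], len);
        else if (trans.is_write())
            std::memcpy(&mem[addr], trans.get_data_ptr(), len);
        delay += sc_core::sc_time(10, sc_core::SC_NS);  // nominal access latency
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};
```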

Standards will help improve the modeling situation, Swan said. He noted that Cadence and STMicroelectronics are proposing some TLM 2.0 extensions at the North America SystemC Users Group (NASCUG) meeting at this year's DAC. But the fundamental question is how to develop fewer types of models. One interesting prospect, he said, is to leverage a high-level model from a virtual platform all the way to high-level synthesis, RTL, and silicon.

Brian Bailey - A Concluding Remark

"In the past 5-10 years as this whole notion of ESL [electronic system level] has been coalescing, many people have said that models are the key to making it work. Now, through experimenting, we finally understand what we need to go forward. But I think it's clear from these presentations that the industry is still not aligned on what is needed and how this is going to fit together," Bailey said.

Indeed, much progress has been made on the system modeling front - but there's still quite a journey ahead.

Richard Goering

 
