Moving Past The Missing Model Syndrome

Comments(4) | Filed under: ARM, virtual platform, System Design and Verification, SoC, C-to-Silicon, Fast Models, Models

One of the issues that has hindered the progress of using Virtual Platforms for early software development is missing models. I recall seeing Axys Design's Maxsim tool back around 2001 and thinking how cool it was. All the user had to do was drag and drop models and wire them together to create a working Virtual Platform. At the time I was working at Axis Systems, so we always called Axys "the other Axis". Axys was eventually acquired by ARM in 2004, but the block diagram editor still exists today in Fast Models from ARM. After the coolness of the demo wore off, I started thinking about the latest and greatest SoC being developed by Samsung, TI, or whoever, and realized that the library to drag and drop from was probably missing almost everything needed to create a complete SoC Virtual Platform. I'm sure the ARM CPU was there, and probably the memories and a few relatively simple peripherals, but that was about it. Where would the rest of the models come from? Would the Virtual Platform for early software development always suffer from the Missing Model Syndrome?

Over the years it seems a few different approaches have been taken to address the case of the missing models.

The first is to simply brute-force the creation of as many models as possible to cut down on the number of missing models. This approach is a lot of work, but it results in a useful library with far broader coverage. One drawback is that SoC designs will always have custom blocks that are the differentiating features of a device (and usually the most complex ones), so no library can cover all of the new or proprietary design blocks. Another drawback is that all of the effort goes into the model library, so the models tend to be closed-source or black-box models that cannot be modified by the user. The bummer for users is that they end up picking tools based on model availability, not on what they really want to use the models for.

The second approach is to provide a language and tools that enable users to do the model creation themselves; the "teach a man to fish" approach. Since Virtual Platforms are abstract and start from the programmer's view of the device, a programming language or a description of what the hardware does can be used. This has the advantage of letting users create models without vendors having to add anything to a library. A drawback is that it relies on the person writing the description to correctly capture the hardware behavior, in most cases from a paper specification. The result can be a model that doesn't behave like the actual device, leading to software that works great on the model but doesn't work on the actual silicon. Of course, a mix of both approaches is possible, but neither is ideal.
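To make this concrete, here is a minimal sketch of the kind of programmer's-view model such a language or library might capture. It is plain C++ rather than any particular modeling framework, and the UART register map in it is invented for illustration: only the registers and side effects that software can observe are described, which is exactly where a misreading of the paper specification can creep in.

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// Hypothetical UART, modeled purely from the programmer's view: only the
// registers and side effects visible to software are described.
class UartModel {
public:
    static constexpr uint32_t DATA   = 0x00;  // write: byte to transmit
    static constexpr uint32_t STATUS = 0x04;  // read: bit 0 = TX ready

    uint32_t read(uint32_t offset) const {
        switch (offset) {
        case STATUS: return 0x1;       // this sketch is always ready to transmit
        case DATA:   return last_tx_;  // assumption: reads return the last byte written
        default:     return 0;         // unmodeled registers read as zero
        }
    }

    void write(uint32_t offset, uint32_t value) {
        if (offset == DATA) {
            last_tx_ = value & 0xFF;
            std::putchar(static_cast<int>(last_tx_));  // the side effect software observes
        }
        // writes to STATUS or unknown offsets are silently ignored in this sketch
    }

private:
    uint32_t last_tx_ = 0;
};

int main() {
    UartModel uart;
    const std::string msg = "hello\n";
    for (char c : msg) {
        while ((uart.read(UartModel::STATUS) & 0x1) == 0) { /* spin until TX ready */ }
        uart.write(UartModel::DATA, static_cast<unsigned char>(c));
    }
    return 0;
}
```

If the real device only sets the TX-ready bit after an interrupt fires, software written against this sketch will hang on silicon, which is exactly the mismatch described above.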

Could the Missing Model Syndrome be the reason why stand-alone Virtual Platform tools have yet to be able to Cross the Chasm to mainstream users?

One of the nice things about emulation is that when you visit a company interested in using emulation, there is never a question about the input that goes into the emulator. The input to emulation is RTL code, and every company has it, since it's the starting point for chip implementation. Virtual Platforms are starting to benefit in the same way thanks to the growth of High-Level Synthesis tools like C-to-Silicon.

Is the lack of a connection between Virtual Platform creation and hardware implementation a factor in the Missing Model Syndrome? 

Certainly, models (and a fast simulator) are necessary, but they are not sufficient to provide the benefits needed to improve software quality. I really try to control my reaction when I hear people say that a free simulator is good enough, but it's not easy.

Simulation is only the base upon which to build features that enable engineers to do the tasks they need to do in a shorter time and with improved quality. I have said many times: step one is to run, step two is to debug (because software never works the first time), and only at step three do you really get to what you wanted to do in the first place, to verify software quality and tune system performance. Virtual Platforms provide value because they are capable of things that are very hard to do with real hardware. Some examples include:

  • Reliably reproduce hard to find bugs like race conditions
  • Debug without help from JTAG, monitors, or stubs
  • Analyze memory usage to optimally size memories to avoid extra cost
  • Non-intrusively profile software performance to understand where time is spent
  • Understand how software changes influence power
  • Non-intrusively monitor data structures to see if they are adequately sized
  • Find out why a new release of the operating system results in degraded system performance
  • Insert hardware errors or inject artificial function return values to fully test software corner-case handling (see the sketch after this list)
  • Collect functional coverage on variables and function arguments
  • Confirm performance is sufficient to avoid multimedia issues like choppy audio
  • Experiment with trade-offs between running functions as C code on a processor compared to implementing a hardware co-processor

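As a sketch of the error-injection item, here is one way a model's read path can be wrapped so a test forces an error value on a specific register read. The hook, the offsets, and the error bit are all invented for this example and are not any particular tool's API; the point is that a virtual platform can provoke corner cases that are awkward or impossible to trigger on real hardware.

```cpp
#include <cstdint>
#include <functional>
#include <iostream>

// Hypothetical read-path hook: the idea is that a virtual platform can sit on a
// model's register read path and let a test override the value returned to software.
class FaultInjector {
public:
    using Override = std::function<bool(uint32_t offset, uint32_t& value)>;

    void set_override(Override hook) { hook_ = std::move(hook); }

    // Called by the (hypothetical) device model on every register read.
    uint32_t read(uint32_t offset, uint32_t normal_value) {
        uint32_t injected = normal_value;
        if (hook_ && hook_(offset, injected)) {
            return injected;       // the test forced a value
        }
        return normal_value;       // unmodified device behavior
    }

private:
    Override hook_;
};

int main() {
    FaultInjector fi;

    // Test scenario: make the invented status register at offset 0x04 report a
    // "device error" bit exactly once, so the driver's recovery path gets exercised.
    bool fired = false;
    fi.set_override([&fired](uint32_t offset, uint32_t& value) {
        if (offset == 0x04 && !fired) {
            fired = true;
            value = 0x80000000u;   // invented error bit for illustration
            return true;
        }
        return false;
    });

    std::cout << std::hex << fi.read(0x04, 0x1) << "\n";  // first read: injected error
    std::cout << std::hex << fi.read(0x04, 0x1) << "\n";  // second read: normal value
    return 0;
}
```

A similar hook on the write path or on function returns would cover the "artificial function return values" half of that item.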
I'm sure there are many more you can think of. My most interesting Virtual Platform discussions are those that get beyond talking about models and simulators and cover the real issues facing engineers trying to get systems working in a shorter time and with higher quality.

Jason Andrews

 

Comments(4)

By Gary Dare on February 19, 2010
This is a great summary of VP benefits, Jason, and I hope that it will be anonymously posted by coffee machines all around the electronics industry.  I'm bookmarking it! :)
But ... the problem is that when one makes the decision to pursue their next design, the Missing Model Syndrome comes back. Part of the answer is ESL, hand in hand with HLS as you point out, once your high-abstraction model of a new IP is done. Then there is IP reuse, TLM and/or RTL, maybe with an assist if an IP-XACT XML file comes along with the individual part or library.

By Mike Bradley on February 24, 2010
The additional challenge is the maintenance of all these models. In the old days, you could get away with only having RTL models. Now we need RTL models and higher-level models. These higher-level models are a bit ad hoc and not standardized.
Some may use Matlab (M), C++, C, SystemC, etc. for these higher-level models. Interoperability of these models is a problem. Plugging in different models and mixing abstractions (e.g. part SystemC, part RTL, etc.) is also a problem.
Another aspect is the groups that develop the models. Often a systems group develops high-level models to gain insight into performance, power, cost, size, etc. These models are often not detailed enough to be of much use to the implementation group, so they will re-code the models, primarily in RTL.
In summary, virtual prototyping won't be mainstream until there is a fully integrated flow from high-level models to implementable models. As you hinted, High-Level Synthesis starts to ease this burden by allowing an implementable flow from high-level models. In other words, rather than fixing the problem of interoperability of disparate models, it's better to settle on a higher level of abstraction for implementation, so we can go back to having just one model to maintain.
That is, until we have synthesis flows from virtual platform to implementation, the virtual platform (and all other high-level models) will be disconnected from the downstream implementation, and will continue to be a maintenance problem and avoided whenever possible. --yah think?

By Marc Serughetti on February 25, 2010
The value obtained from using a virtual platform greatly surpasses the cost of modeling it. Companies need to get over this barrier and start thinking about the results they would get rather than just the enablement. Real ROI calculations and experience quickly demonstrate this fact.

Virtual Platforms have wide usage covering architecture, verification, software development, customer enablement, and more. To clearly calculate this ROI, companies must think of the virtual platform as an infrastructure serving all these use cases with a +/- 20% change to a foundation model. Today too many companies focus on a single subset of a use model, thus misquantifying the true return of using virtual platforms.


By Gary Smith on March 24, 2010
Spot on, Jason.
