Combating System-Level Design Confusion

Filed under: ESL, embedded software, SystemC, TLM, System Design and Verification, virtual platforms, Gary Smith, virtual prototypes, C++, System-Level Design, architects workbench, architectural, software virtual prototype, silicon virtual prototype

I would like to add my thanks to Gary Smith for his short "Industry Note" titled "ESL Behavioral Design" that I first saw in a post by Steve Leibson. Yes, the note is pretty short and the topic is pretty broad, but the diagram and the definitions of Silicon Virtual Prototype (SVP) and Software Virtual Prototype (SWVP) are a big help by themselves. System-level design is complex because it involves a number of different use cases that have different goals and requirements, but are closely related. At DVCon 2011, John Aynsley listed some virtual platform use cases:

  • Software Development
  • Software Performance Analysis
  • Architectural Analysis
  • Hardware Verification

The use cases also go by multiple names that are not always obvious, especially when software starts mixing with architectural analysis. This muddies the water as companies try to find the best way to achieve results.

As I think back, there were many times I wish I had had Gary's simple diagram. In fact, I'm thinking I should make a laminated version and carry it in my backpack or in my pocket. Not because it's revolutionary, but because showing it to somebody and asking them which of the three ovals they are talking about would save time and avoid confusion. I'm also hoping the diagram will help people realize they can't just say, "I want to build something that does everything," effectively lumping all three ovals into one.

Below are some examples. Everything below is fiction, but based on things that actually happen all the time. Also, don't read into the details about what Cadence products may or may not be good at, since the scenarios are not about Cadence products.

 

Scenario 1:

Customer X calls to talk about "Virtual Platforms" (but doesn't say what kind because they don't have the diagram!). I give an introduction to virtual platforms for software development, along with some details about how to tune system performance by analyzing the software stack running on the hardware. Then the questions start:

Engineer X: This is great, our products run Android on the application processor and we would like to tune system performance. Can you provide metrics about TLM 2 transaction counts and data throughput?

Me: Yes, this can be done with no changes to existing models.
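(As an aside, the usual way to do this without touching the models is to drop a small pass-through monitor between an existing initiator socket and target socket. The sketch below is illustrative only; the module and member names are made up and are not any particular product's API.)

// Hypothetical pass-through monitor: bind it between an existing initiator
// and target to count TLM transactions and bytes without changing either model.
// (DMI and debug-transport forwarding are omitted for brevity.)
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

struct TlmMonitor : sc_core::sc_module {
  tlm_utils::simple_target_socket<TlmMonitor>    in;   // faces the original initiator
  tlm_utils::simple_initiator_socket<TlmMonitor> out;  // faces the original target

  unsigned long long num_transactions = 0;
  unsigned long long num_bytes        = 0;

  SC_CTOR(TlmMonitor) : in("in"), out("out") {
    in.register_b_transport(this, &TlmMonitor::b_transport);
  }

  void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
    ++num_transactions;                      // one more transaction observed
    num_bytes += trans.get_data_length();    // accumulate payload size for throughput
    out->b_transport(trans, delay);          // forward unchanged to the real target
  }
};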

Engineer X: Is there a lot of performance overhead required to keep track of these metrics?

Me: No, but if you want to run an operating system like Linux, then TLM 2 DMI (direct memory interface) is the only way to get high enough performance. Android runs billions of instructions before you can make a phone call, and simulating every instruction fetch with a blocking transport call takes time.
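(For readers who haven't worked with DMI, the sketch below shows, in rough form, how an instruction-fetch path can use a direct memory pointer when the target grants one and fall back to a blocking transport call otherwise. It is a simplified illustration, not any vendor's actual CPU model; the names are hypothetical and real models manage DMI regions far more carefully.)

// Sketch of a CPU instruction-fetch path using TLM 2 DMI where available.
#include <cstdint>
#include <cstring>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>

struct CpuFetch : sc_core::sc_module {
  tlm_utils::simple_initiator_socket<CpuFetch> socket;

  tlm::tlm_dmi dmi;               // cached DMI region, if the target granted one
  bool         dmi_valid = false;

  SC_CTOR(CpuFetch) : socket("socket") {
    socket.register_invalidate_direct_mem_ptr(this, &CpuFetch::invalidate_dmi);
  }

  // The target tells us a previously granted pointer is no longer safe to use.
  void invalidate_dmi(sc_dt::uint64, sc_dt::uint64) { dmi_valid = false; }

  uint32_t fetch(sc_dt::uint64 addr) {
    uint32_t insn = 0;

    // Fast path: read straight from host memory -- no b_transport call at all.
    if (dmi_valid && addr >= dmi.get_start_address() && addr + 3 <= dmi.get_end_address()) {
      std::memcpy(&insn, dmi.get_dmi_ptr() + (addr - dmi.get_start_address()), 4);
      return insn;
    }

    // Slow path: one blocking transaction per instruction fetch.
    tlm::tlm_generic_payload trans;
    sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
    trans.set_command(tlm::TLM_READ_COMMAND);
    trans.set_address(addr);
    trans.set_data_ptr(reinterpret_cast<unsigned char*>(&insn));
    trans.set_data_length(4);
    trans.set_streaming_width(4);
    trans.set_byte_enable_ptr(nullptr);
    trans.set_dmi_allowed(false);
    trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);
    socket->b_transport(trans, delay);

    // If the target hints that DMI is available, ask for the pointer once.
    if (trans.is_dmi_allowed())
      dmi_valid = socket->get_direct_mem_ptr(trans, dmi);

    return insn;
  }
};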

Engineer X: So, with DMI enabled, can I use a SystemC virtual platform to confirm the architecture of my memory controller?

Me: No, with DMI the CPU model gets a pointer to memory and you don't see any memory accesses, so there is no way to track memory-access metrics for regions with DMI enabled. You can, however, dynamically turn DMI on and off to take some detailed measurements.
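(Here is a sketch of what "turning DMI on and off" looks like from the target side: a hypothetical memory model that refuses new DMI requests and invalidates outstanding pointers while a measurement window is open, so accesses become visible through b_transport again. Names and structure are illustrative only; bounds checks and error handling are omitted.)

// Hypothetical memory target that can revoke DMI for a measurement window.
#include <cstring>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>
#include <vector>

struct MeterableMemory : sc_core::sc_module {
  tlm_utils::simple_target_socket<MeterableMemory> socket;
  std::vector<unsigned char> mem;
  bool metering = false;

  SC_CTOR(MeterableMemory) : socket("socket"), mem(0x100000) {
    socket.register_b_transport(this, &MeterableMemory::b_transport);
    socket.register_get_direct_mem_ptr(this, &MeterableMemory::get_dmi);
  }

  void set_metering(bool on) {
    metering = on;
    if (on)  // force initiators to drop their pointers and use b_transport
      socket->invalidate_direct_mem_ptr(0, mem.size() - 1);
  }

  void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time&) {
    unsigned char* ptr = &mem[trans.get_address()];
    if (trans.is_read())
      std::memcpy(trans.get_data_ptr(), ptr, trans.get_data_length());
    else
      std::memcpy(ptr, trans.get_data_ptr(), trans.get_data_length());
    trans.set_dmi_allowed(!metering);   // only hint at DMI when not metering
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
  }

  bool get_dmi(tlm::tlm_generic_payload&, tlm::tlm_dmi& dmi) {
    if (metering) return false;         // refuse DMI while measuring
    dmi.allow_read_write();
    dmi.set_dmi_ptr(mem.data());
    dmi.set_start_address(0);
    dmi.set_end_address(mem.size() - 1);
    return true;
  }
};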

Engineer X: OK, I think I need to use Approximately Timed models.

Me: Sure, you can get some metrics from Loosely Timed (LT) models using blocking transport, but for detailed performance analysis it would be better to use Approximately Timed (AT) models with non-blocking transport.
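(The difference in coding style is roughly the following. This is a simplified sketch of the two TLM 2 styles, not a complete AT protocol implementation; a real AT model must handle all four phases and the backward path.)

#include <systemc>
#include <tlm>

// Loosely Timed: the whole access completes inside one blocking call, with
// timing carried as an annotation on the delay argument.
void lt_access(tlm::tlm_fw_transport_if<>& target, tlm::tlm_generic_payload& trans) {
  sc_core::sc_time delay(10, sc_core::SC_NS);
  target.b_transport(trans, delay);   // returns when the transaction is done
}

// Approximately Timed: the access is split into phases with explicit timing
// points, which is what makes detailed performance analysis possible.
tlm::tlm_sync_enum at_nb_transport_fw(tlm::tlm_generic_payload&,
                                      tlm::tlm_phase&   phase,
                                      sc_core::sc_time& delay) {
  if (phase == tlm::BEGIN_REQ) {
    // Accept the request now; model queueing/arbitration, then send BEGIN_RESP
    // later on the backward path (not shown).
    phase  = tlm::END_REQ;
    delay += sc_core::sc_time(5, sc_core::SC_NS);
    return tlm::TLM_UPDATED;
  }
  if (phase == tlm::END_RESP)
    return tlm::TLM_COMPLETED;        // the initiator has taken the response
  return tlm::TLM_ACCEPTED;
}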

Engineer X: Can I mix LT, AT, and RTL models in your simulator?

Me: Yes.

Engineer X: Can I boot Linux on a virtual platform with an AT model for my memory controller and all other components as LT models?

Me: Probably not, unless you have a lot of patience. There are a lot of instruction fetches from memory that would dominate simulation time.

Engineer X: This doesn't seem to be helping me, can I put SystemC models into an FPGA?

Me: Maybe; we have a C-to-Silicon Compiler that can do this, but why?

Engineer X: So they run faster.

Me: But then you can't observe anything and it's impossible to measure any performance.

Engineer X: I guess we should get the RTL for the memory controller.

Me: That may come later, but I thought you wanted to confirm the architecture of the memory controller was correct.

Engineer X: Yes, that's true, but at least it would be cycle accurate.

Me: So much for an abstract model used to run Android.

 

Scenario 2:

Customer Y asks some questions about SystemC Virtual Platforms.

Engineer Y: How fast can virtual platform processor models run?

Me: Somewhere in the tens or hundreds of MIPS range. There is not a lot of published data on CPU models themselves, but http://www.ovpworld.org/ is a good place to see some actual metrics.

Engineer Y: Do I have to run software on a virtual platform?

Me: I'm not sure what you mean, but it depends on what you are trying to achieve. Most virtual platforms are made for the purpose of running software.

Engineer Y: The first thing I usually do when I start constructing my simulation environment is remove the processors from the system and use traffic generators instead.

Me: OK, I guess multi-core embedded software debugging is not interesting to you.
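(A traffic generator in this context is typically just a SystemC thread issuing synthetic transactions in place of the processor model. The sketch below is illustrative only; the module name, transaction size, and fixed injection rate are made up.)

#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>

struct TrafficGen : sc_core::sc_module {
  tlm_utils::simple_initiator_socket<TrafficGen> socket;
  unsigned char buf[64] = {};

  SC_CTOR(TrafficGen) : socket("socket") {
    SC_THREAD(generate);
  }

  void generate() {
    sc_dt::uint64 addr = 0;
    for (;;) {
      tlm::tlm_generic_payload trans;
      sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
      trans.set_command(tlm::TLM_READ_COMMAND);
      trans.set_address(addr);
      trans.set_data_ptr(buf);
      trans.set_data_length(sizeof(buf));
      trans.set_streaming_width(sizeof(buf));
      trans.set_byte_enable_ptr(nullptr);
      trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);
      socket->b_transport(trans, delay);

      addr += sizeof(buf);                          // walk through the address map
      wait(sc_core::sc_time(100, sc_core::SC_NS));  // fixed injection rate
    }
  }
};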

 

Scenario 3:

Somebody makes a random comment such as:

Don't add any product features for architectural analysis; the number of architects in the world is very small, and there is not a big enough market to justify the cost.

 

Scenario 4:

After I describe the meeting with Customer X, the following discussion occurs:

Fred: Virtual platform vendor A seems to have a good product, but it's only for architectural analysis. It can simulate multiple instances of large systems very efficiently.

Me: Why do you think it's only for architectural analysis?

Fred: Because the models are very abstract, just enough to determine traffic flow in the system.

Me: Does it include processor models running target software?

Fred: Yes.

Me: How does this relate to Engineer X, who wanted to do architectural analysis on his memory controller with AT models or cycle-accurate RTL?

Fred: There are different types of architectural analysis.

Me: Oh.

These are just a few examples to demonstrate how Gary's diagram would have made things easier. I spend most of my time in the SWVP area, but it crosses over into SVP all the time. I'm sure the same is true for people who spend most of their time in the SVP area: they start crossing into the software area. The first step for any company is to determine its goals and start by trying to solve a specific problem or answer a specific question. Many companies have been able to avoid confusion by segmenting the use models and doing a good job at each one. More advanced companies are finding efficiencies in the overlap and connecting the use cases without being crippled by conflicting requirements and goals.

Feel free to share your experiences in trying to discuss the three types of virtual platforms.

Jason Andrews

Comments (1)

By LarryL on April 13, 2011
Jason,
Nice column, and thanks for the link to the Open Virtual Platforms (OVP) website.  
The thing is, if you showed Gary's diagram to someone, and asked them which of the three areas they were talking about, as often as not in our experience, the answer is "Yes".  They are interested in all three.  Then we start asking about priorities, and specific issues that need solving in the different areas, and still no resolution is achieved.  
There is still a significant effort needed, not so much on what the models are, or the uses of those models, but on how best to implement various methodologies that take advantage of those different virtual platforms.  And that is going to have to be a collaborative effort between tool developers and users.  
Larry
