
What Does SystemC Mean for Design and Verification?

Filed under: Functional Verification, SystemC, IES-XL, TLM, VSP, Virtual System Platform, uvm world, C-to-Silicon, Incisive Enterprise Simulator

My colleague Jack Erickson recently published a post in the Cadence System Design and Verification Community entitled "IP Cannot Be an Efficient Abstraction Level without SystemC!" When I saw the title, my immediate reaction was to write a complementary post called "SystemC Cannot Be an Efficient Abstraction Level without IP!" That got me thinking about the industry momentum toward using SystemC rather than traditional RTL as a design language. I chose a more general title because there are three key points I want to hit.

My first comment is that I agree with Jack's conclusions. Because C-to-Silicon Compiler's high-level synthesis can transform the same design description in different ways for different applications, SystemC IP is inherently more reusable than RTL IP. I also agree that SystemC can deliver a significantly higher level of abstraction than RTL. Sure, it's possible to write a SystemC design description that's nothing more than RTL in another language, but with training, designers can learn to write untimed, high-level code that delivers real gains in abstraction and reusability.
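To make the "untimed, high-level" point concrete, here is a hypothetical sketch of the algorithmic style that high-level synthesis consumes, written in plain C++ (SystemC is a C++ class library, so in a real design this computation would sit inside an SC_MODULE process and use types like sc_int; the filter itself, its tap count, and its coefficients are invented for illustration):

```cpp
#include <array>
#include <cstddef>

// A 4-tap FIR filter described purely as an algorithm. Note what is
// absent: no clock, no registers, no state machine. A high-level
// synthesis tool decides how many cycles the loop takes and where
// storage elements go, which is exactly what makes this description
// reusable across different performance targets.
constexpr std::size_t kTaps = 4;

int fir(const std::array<int, kTaps>& coeff,
        const std::array<int, kTaps>& window) {
    int acc = 0;
    for (std::size_t i = 0; i < kTaps; ++i) {
        acc += coeff[i] * window[i];  // one multiply-accumulate per tap
    }
    return acc;
}
```

An RTL version of the same filter would instead spell out a clocked process, shift registers for the sample window, and explicit pipeline flops, all of which would have to be rewritten to retarget the design.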

Given the advantages of SystemC-based design, why is it not yet universally adopted? I believe that it's valid to draw a comparison with the rise of RTL-based design in the early 90s. I was a pioneer in that transition, taping out in 1989 what I believe was only the second chip anywhere using a commercial logic-synthesis tool. RTL for simulation and modeling had been around for a number of years previously, but the availability of logic synthesis was the key driver for RTL replacing gate-level schematics for design input.

There were other enabling technologies, including RTL-to-gates equivalence checking, RTL-based design rule checkers, and the availability of commercial RTL design IP. Being able to license proven design IP for a wide array of standard interfaces was a good reason to move to RTL if not already there. Even "star IP" providers such as ARM began offering RTL versions of their cores. My second main point, and the complement to Jack's title, is that the availability of SystemC design IP will be a strong incentive for designers to move up from RTL.

I say "will be" because I don't see a lot of SystemC design IP out there yet. I searched the ChipEstimate site for the keyword "SystemC" and found only a half-dozen listings, several of which appear to be RTL designs with SystemC simulation models. I have little doubt that this will change; logic synthesis was around for several years before the RTL-based IP industry made a significant impact. I expect a similar "chicken and egg" effect with the adoption of C-to-Silicon Compiler and the availability of SystemC design IP.

My final topic is what the transition from RTL to SystemC design means for my world of functional verification. Today, many SystemC designers perform the bulk of their verification at the RTL level, using the output of high-level synthesis. Again, there is a clear comparison with the early days of RTL design, when designers still ran lots of gate-level simulation. This changed over time, and likewise I expect that verification will become more and more centered on the SystemC design description.

Cadence has done a lot of work to ensure that this transition will be painless for our customers, including:

  • Extending the Universal Verification Methodology (UVM) to include SystemC models, including SystemC verification IP (VIP) components
  • Ensuring that the same UVM testbenches running in Incisive Enterprise Simulator can be used for both SystemC and RTL designs
  • Using the same underlying technology in SystemC testbench simulation and in the Cadence Virtual System Platform to ensure consistent behavior

I'll defer to my colleague Jack to forecast the industry's move from RTL to SystemC design in more detail, but it's clear to me that this is happening and that it has a lot in common with the gates-to-RTL transition. EDA vendors worked hard to ensure an easy path for their customers back then, and we're equally committed to evolving our tools and methodologies today for customer success. I'd love to hear from you about SystemC-based design. Are you doing it? Considering it? If not, why not? What can we do to help? I look forward to your comments.

Tom A.

The truth is out there...sometimes it's in a blog  


By Sandeep Sathe on August 30, 2011
I would like to know more about how the untimed SystemC model will be converted to a netlist. More specifically, where would the HLS tool put flops so that the timing is met? How about advanced timing features like pipelining? How about supporting DDR?

By Jack Erickson on September 13, 2011
Good questions, Sandeep. High-level synthesis tools such as C-to-Silicon Compiler raise the abstraction of your input code, so you don't need to worry about where to insert flops. C-to-Silicon then analyzes all paths to maximize the available time and squeeze as much logic as possible into each path. It performs this analysis using the Cadence RTL Compiler logic synthesis tool under the hood, so it can accurately time each path and ensure that the generated RTL will predictably close timing.

Pipelining is another big advantage of using C-to-Silicon, since you don't need to hardcode any of the pipeline structures in the input SystemC; just focus on the algorithm and let C-to-Silicon implement it with varying numbers of pipeline stages to compare overall area, latency, and throughput. Then, if you want to increase the clock frequency, simply re-run the pipeline command with a different number of stages to meet timing.
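The separation Jack describes between algorithm and pipeline structure can be illustrated with a hypothetical sketch (the function, its gain/offset parameters, and the 0-255 saturation range are invented for illustration, and std::clamp requires C++17):

```cpp
#include <algorithm>
#include <vector>

// Three dependent operations per sample: scale, offset, saturate.
// In hand-written RTL you would place flops between these operations
// yourself and rewrite the code to change the pipeline depth. With
// HLS the loop body stays purely algorithmic; a tool-side pipeline
// directive chooses how many register stages to insert to meet the
// target clock, so retargeting means re-running the tool, not editing
// this source.
std::vector<int> process(const std::vector<int>& in, int gain, int offset) {
    std::vector<int> out;
    out.reserve(in.size());
    for (int x : in) {
        int scaled = x * gain;                      // candidate stage 1
        int biased = scaled + offset;               // candidate stage 2
        out.push_back(std::clamp(biased, 0, 255));  // candidate stage 3
    }
    return out;
}
```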

Regarding DDR memories: yes, C-to-Silicon supports any memories that memory compilers generate, and it does not depend on the internal memory implementation.
