High-Level Design and Verification: How Can We Finally Move on From the Forrest Gump Era?

Filed under: C-to-Silicon Compiler, verification, ESL, SystemC, RTL, TLM, DAC, System Design & Verification, high level synthesis, HLS, SoC, C++, high-level verification, high-level design, DAC panel, Forrest Gump

Richard Goering wrote an excellent summary of the DAC panel "High Level Synthesis Deployment: Are We Ready?"

His conclusion is that we are getting close, and one of the biggest hurdles still to overcome is the skill set -- the combination of hardware design expertise and C++ -- which represents an opportunity for engineers seeking a new or better career.

That is a very insightful conclusion, given what we see in terms of drivers. Both Eli Singerman of Intel and Mark Johnstone of Freescale pointed out that they cannot achieve the design productivity they need, at today's design complexity, with RTL-based flows. Referring back to Clem Meas' introduction slide, we are still designing chips using the methodology that came into vogue with Netscape Navigator and Forrest Gump.

We are long overdue for a better way, and it should have arrived around the same time as Harry Potter and the Prisoner of Azkaban. Instead, in order to get chips out the door, companies have come to rely on buying off-the-shelf IP, often from low-cost design houses, since that IP offers them no differentiation. Some of our C-to-Silicon Compiler customers tell us they adopted HLS precisely so they could differentiate via hardware again. But if they cannot find engineers capable of using this methodology, the tools have little value.

There were other obstacles noted by panelists, some of which are solved by tools like C-to-Silicon Compiler. These include QoR comparable to hand-written RTL, an ECO flow, and use of a standard input language. But Eli Singerman pointed out that verification techniques still lag behind. Yet verification is one of the biggest benefits of moving up in abstraction. Mark Johnstone described a recent project in which he had to design a transcendental function with 7 cycles of latency. He designed it in SystemC and ran 10 million inputs in 15 seconds; at 7 cycles per input, that is 70 million cycles in 15 seconds. He was then able to run 4 billion patterns of random stimulus in 30 minutes. "I simply could not have done this much verification and found this many bugs as fast with RTL simulation," he said.
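To make that concrete, here is a minimal sketch of this style of high-level verification. It is not Johnstone's actual design: the module structure, the Taylor-polynomial datapath, the input range, and the iteration count are all assumptions for illustration, and it presumes a standard SystemC installation.

#include <systemc.h>
#include <cmath>
#include <cstdlib>
#include <iostream>

static const double PI = 3.14159265358979323846;

// Hypothetical untimed model of a sine approximation -- the kind of
// function an HLS tool would later schedule into a short fixed-latency
// pipeline. The datapath is a 7th-order Taylor polynomial.
SC_MODULE(SinUnit) {
    sc_fifo_in<double>  in;
    sc_fifo_out<double> out;

    void compute() {
        while (true) {
            double x  = in.read();
            double x2 = x * x;
            // x - x^3/6 + x^5/120 - x^7/5040, in Horner form
            out.write(x * (1.0 - x2/6.0 * (1.0 - x2/20.0 * (1.0 - x2/42.0))));
        }
    }
    SC_CTOR(SinUnit) { SC_THREAD(compute); }
};

// Testbench: random stimulus plus a golden-model check against std::sin.
SC_MODULE(Testbench) {
    sc_fifo_out<double> stim;
    sc_fifo_in<double>  result;

    void run() {
        const int N = 100000;   // Johnstone cites 10M inputs in 15 seconds
        int errors = 0;
        for (int i = 0; i < N; ++i) {
            // Random input in [-pi/2, pi/2], where the polynomial is accurate.
            double x = ((double)std::rand() / RAND_MAX - 0.5) * PI;
            stim.write(x);
            if (std::fabs(result.read() - std::sin(x)) > 1e-3)
                ++errors;
        }
        std::cout << N << " inputs checked, " << errors << " mismatches\n";
        sc_stop();
    }
    SC_CTOR(Testbench) { SC_THREAD(run); }
};

int sc_main(int, char*[]) {
    sc_fifo<double> a(16), b(16);   // channels between testbench and model
    SinUnit   dut("dut");
    Testbench tb("tb");
    dut.in(a);   tb.stim(a);
    dut.out(b);  tb.result(b);
    sc_start();
    return 0;
}

Because the model is untimed C++ rather than cycle-accurate RTL, each transaction costs a function call instead of a clocked simulation step, which is where the orders-of-magnitude throughput difference Johnstone describes comes from.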

So the potential is there, but we need to extend today's mature metric-driven verification methodologies up to these higher levels of abstraction to take full advantage of it. Fortunately that effort is in progress, but it is vital to mature it so it can be adopted broadly.
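As one illustration of what "metric-driven" could mean above RTL, here is a small sketch of a hand-rolled functional-coverage collector that bins a real-valued stimulus stream. The bin layout, class name, and report format are my own assumptions; a production flow would use a real coverage library and methodology rather than this.

#include <array>
#include <cstdio>

// Split the legal input range into bins and count how many are exercised.
struct RangeCoverage {
    static const int BINS = 8;
    std::array<long, BINS> hits{};
    double lo, hi;

    RangeCoverage(double lo_, double hi_) : lo(lo_), hi(hi_) {}

    // Call once per stimulus transaction.
    void sample(double x) {
        int b = (int)((x - lo) / (hi - lo) * BINS);
        if (b >= 0 && b < BINS) ++hits[b];
    }

    // Coverage metric: fraction of bins hit at least once.
    void report() const {
        int covered = 0;
        for (long h : hits) covered += (h > 0);
        std::printf("input coverage: %d/%d bins exercised\n", covered, BINS);
    }
};

Hooked into a testbench like the one sketched above, the stimulus thread would call sample() on every input it writes and report() before stopping, so the same coverage-closure loop teams already run at RTL could drive the high-level testbench too.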

The other big impediment to faster adoption of HLS is the amount of legacy RTL. Blocks designed in SystemC/TLM see the productivity and runtime improvements, but they must be integrated with all of the RTL IP in an SoC, so full-chip verification remains time-consuming. And because the RTL IP is already proven and verified at the unit level, creating a SystemC/TLM version adds overhead and risk. Most companies therefore take the evolutionary approach of targeting only new IP for SystemC/TLM. This makes sense, but given how much hardware is re-used, it slows the overall adoption of HLS.

This last issue is systemic -- there may be no solution other than time. Of course, if the first issue is solved and we can produce more designers who are proficient in this methodology, we may find that demand for new IP is actually larger than we think. Today, companies design new IP only when they absolutely have to. Perhaps when the supply of HLS-capable designers is sufficient, companies will be more willing to fund new IP development.

So the challenge comes back to developing C++/SystemC/HLS expertise among hardware designers. There are training courses offered by companies like Doulos, which do a good job of educating engineers. But these courses have been available for a while. The tools are available, and the methodologies are beginning to mature.

The conclusion drawn by Johnstone and Singerman was that we are capable of deployment, but that the realistic step right now is pilot projects. So I ask -- what else is needed to help this take off? I'd love to see ideas in the comments!

Jack Erickson
