CDNLive 2014: Follow the Data to Optimize System Design--Chris Rowen

Filed under: Cadence, Chris Rowen, electronics design, microprocessor design, computer design, ip cores, ip vendors, CDNLive

SANTA CLARA, Calif.--To understand the past, present, and future of electronics system design, follow the data.

That was the message from Cadence Fellow and Tensilica founder Chris Rowen during a keynote presentation at CDNLive Silicon Valley 2014 (March 11) here.
Rowen, in a captivating 30-minute speech, took the audience on a soaring, colorful journey from silicon and materials up into the cloud and back down again to define the ways in which data at all levels dictates design:

"If we follow the data, we will be able to follow the applications, follow the energy, follow the cost, and really follow the opportunities to transform people's lives."

In fact, design today is composed of myriad systems within systems, each sharing three attributes: they're data-intensive, distributed, and energy-limited.

Said Rowen:

"It's no longer possible to think of the system as just this chip or that chip. The system that we're all dealing with is, in fact, my device, plus the wireless infrastructure, plus that Google cloud that together make up the environment of interest.

As we watch the data flow from data capture up through some object tracking and augmented-reality application across an LTE network to an LTE base station, through the wired network to a data server with its SSD and HDD and reflecting off the far end and percolating that data back down, you see just how interconnected it is."

Optimizing for Data in the Cloud

Rowen started high up in the atmosphere, with the cloud. Since data is only useful when it's computed, aggregated device data is best managed in massive compute farms that amortize compute and access cycles across vast amounts of incoming data and that sit in regions where energy is affordable.

Rowen next examined the servers themselves--a physical scale of a few meters, compared with the thousands of kilometers separating server farms from the devices they serve.

Here, much of the design challenge today involves the shunting of data between processors and storage. And here, we're in the midst of a design transformation that pits hard disk drives against solid-state storage. HDDs are cheaper per bit, but solid-state memories offer speed and efficiency advantages that HDDs don't. Today, server architects are blending the two for optimal solutions, Rowen noted.

"We see the rise of solid state but not the death of hard disk drive that is driven in part by this geography that says those are relatively high-energy, shared data aggregation devices. They go to the cloud."

Device-Level Optimization

Within servers, down at the SoC level, lies another set of data-driven design challenges, entwined with the intimate relationship between the processor and its cache memory. Cost and energy concerns proliferate at this level.

"When you look at the whole memory system from Level 1 cache, to Level 2 cache, Level 3 cache and most especially getting off chip to flash and DDR, that's going to easily be 50-75% of energy consumed in many systems. You have to figure out how you amortize that cost, how you manage a design around that fundamental cost."

Contemporary processor architectures use caches to minimize latency and reduce the number of off-chip references.

"The very nature of a cache is it's guessing," Rowen said. This is good when it comes to handling the uncertainties of applications. But in an era when we design more and more around types of data, this approach can be suboptimal.
Rowen said:

"As we follow the data, we care intensely about the data-intensive tasks where we have some significant opportunities to really handle data in a better, more structured way."

Here, engineers can optimize computing and energy efficiency.

"Part of the conventional wisdom of the SoC world is that dedicated logic is very efficient, and it is. We routinely see data...that shows the energy efficiency -- the MOPS/mW -- can, in dedicated logic, be as much as a thousand times better than doing the same computation on a general-purpose CPU."


But Rowen said this holds true only in specialized cases where little data is being managed because accessing the data -- fetching and decoding -- is costly from an energy standpoint. Performing data-intensive computation in dedicated logic, "you might find that you were spending all your energy doing the same memory accesses that you would do in all these other kinds of processors and you'd lose the advantage," he added.
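
A short Amdahl-style calculation, with assumed numbers purely for illustration, makes the argument concrete: if dedicated logic improves only the compute portion of the energy by 1,000x while the memory accesses still have to be paid in full, the overall advantage collapses as the memory share of the workload grows.

```python
# Illustrative Amdahl-style sketch with assumed numbers: a large compute-energy
# advantage for dedicated logic shrinks once memory-access energy, which the
# dedicated logic still has to spend, dominates the workload.

def overall_advantage(mem_energy_fraction, compute_advantage=1000.0):
    """Overall energy advantage vs. a CPU baseline when only the compute
    share of the baseline energy benefits from the dedicated-logic gain."""
    compute_fraction = 1.0 - mem_energy_fraction
    return 1.0 / (mem_energy_fraction + compute_fraction / compute_advantage)

for mem_frac in (0.01, 0.25, 0.50, 0.75):
    print(f"memory = {mem_frac:4.0%} of baseline energy -> "
          f"overall advantage ~ {overall_advantage(mem_frac):6.1f}x")
```

With memory at three-quarters of the baseline energy, a nominal 1,000x MOPS/mW advantage buys barely 1.3x overall, which is Rowen's point about data-intensive workloads eroding the dedicated-logic edge.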

Three Approaches

So, contemporary SoC design has responded to challenges in three broad ways:

  • CPUs (great for software-intense environments)
  • Programmable data processors
  • Hardwired RTL


Rowen said that in current SoC design, engineers are often much better off delegating more to software and less to specialized hardwired logic.

Here, Rowen called out the notion of dataplane processing and the technological approach that Tensilica (acquired by Cadence in early 2013) was founded on.

The technology lets every architect select or describe a processor's key attributes--the instruction set, the interfaces--and then, at a high level, run a processor generator that creates the complete hardware design along with the complete software development environment: compilers, debuggers, simulators, RTOS ports, everything needed to instantiate and program it--so you can generate any set of processors you need, he said.

This puts enormous flexibility into the hands of designers and architects, whether it's the Xtensa customizable processor solution or the pre-configured cores for audio, video, imaging and communications.

"Driving data-centric processing is at the heart of this flexible SoC design method," he said.

Rowen closed, in part, by saying:

"Real systems are deep pipelines of computation from sensor to cloud, so you really need a system view of the energy, the computation, and the application driving it."

 

Brian Fuller

Related stories:

-- ISSCC: Perspectives on System-Design Evolution
-- We Need to Move "Past EDA": Tensilica Founder Rowen
-- CDNLive Silicon Valley Proceedings

 
