SANTA CLARA, Calif.--To understand the past,
present, and future of electronics system design, follow the data.
That was the message from Cadence Fellow and
Tensilica founder Chris Rowen during a keynote presentation at CDNLive Silicon Valley 2014 (March 11) here.
Rowen, in a captivating 30-minute speech, took
the audience on a soaring, colorful journey from silicon and materials up into
the cloud and back down again to define the ways in which data shapes design at
every level. If we follow the data, he argued, we will be able to follow the
applications, follow the energy, follow the cost, and really follow the
opportunities to transform design.
In fact, design
today is composed of myriad systems within systems, each sharing three
attributes: they're data intensive, distributed, and energy limited.
"It's no longer possible to think of
the system as just this chip or that chip. The system that we're all dealing
with is, in fact, my device, plus the wireless infrastructure, plus that Google
cloud that together make up the environment of interest.
"As we watch the data flow from data
capture up through some object tracking and augmented-reality application
across an LTE network to an LTE base station, through the wired network to a
data server with its SSD and HDD and reflecting off the far end and percolating
that data back down, you see just how interconnected it is."
Optimizing for Data in the Cloud
Rowen started high
up in the atmosphere: the cloud. Since data is useful only when it's processed,
aggregated device data is best managed in massive compute farms, which can
amortize compute and access cycles across vast amounts of incoming data and
which are located in areas where energy is affordable.
Rowen next examined servers themselves--a
physical scale of a few meters, compared with the thousands of kilometers that
separate server farms from the devices connected to them.
Here, much of the
design challenge today involves the shunting of data between processors and
storage. And here, we're in the midst of a design transformation that pits hard
disk drives against solid-state storage. The former is cheaper on a
cost-per-bit basis, but solid-state memories offer speed and efficiency
that HDDs don't. Today, server architects are blending the two for
optimal solutions, Rowen noted.
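The blending Rowen describes can be sketched as a simple tiering policy: frequently accessed data goes to flash, cold bulk data to disk. The prices, threshold, and object names below are illustrative assumptions, not figures from the talk:

```python
# Toy model of hybrid SSD/HDD tiering. Prices and the hot/cold threshold
# are invented for illustration.

PRICE_PER_GB = {"ssd": 0.50, "hdd": 0.03}  # assumed $/GB, not real quotes

def place_data(objects, hot_threshold=100):
    """Assign each object to SSD or HDD by access frequency.

    objects: list of (name, size_gb, accesses_per_day) tuples.
    Returns a dict mapping tier -> list of object names.
    """
    tiers = {"ssd": [], "hdd": []}
    for name, _size_gb, accesses in objects:
        tier = "ssd" if accesses >= hot_threshold else "hdd"
        tiers[tier].append(name)
    return tiers

def blended_cost(objects, placement):
    """Total storage cost of a placement under the assumed per-GB prices."""
    sizes = {name: size for name, size, _ in objects}
    return sum(sizes[name] * PRICE_PER_GB[tier]
               for tier, names in placement.items() for name in names)

data = [("session-index", 10, 5000), ("cold-archive", 1000, 2)]
placement = place_data(data)   # hot index lands on SSD, archive on HDD
cost = blended_cost(data, placement)
```

The point of the blend is visible in the numbers: the archive would cost far more if forced onto flash, while the hot index would be too slow on disk.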
"We see the rise of solid state but
not the death of the hard disk drive. That is driven in part by this geography,
which says those are relatively high-energy, shared data-aggregation devices.
They go to the cloud."
Within servers, down
at the SoC level, lies another set of data-driven design challenges, entwined
with the intimate relationship between the processor and cache memory.
Today, cost and energy concerns proliferate at this level.
"When you look at the whole memory
system from Level 1 cache to Level 2 cache to Level 3 cache, and most especially
getting off chip to flash and DDR, that's going to easily be 50-75% of the energy
consumed in many systems. You have to figure out how you amortize that cost, how
you manage a design around that fundamental cost."
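A back-of-the-envelope model shows how a figure like 50-75% arises. The per-access energies below are illustrative assumptions (each level roughly an order of magnitude costlier than the last), not measured numbers:

```python
# Rough memory-system energy model. Per-access energies in picojoules are
# illustrative assumptions, not measurements from any real chip.

ENERGY_PJ = {"l1": 1.0, "l2": 10.0, "l3": 50.0, "dram": 500.0}

def memory_energy(accesses):
    """Total memory energy in pJ for a dict of per-level access counts."""
    return sum(accesses[level] * ENERGY_PJ[level] for level in accesses)

# Hypothetical workload: most references hit L1, a small fraction reach DRAM.
accesses = {"l1": 1_000_000, "l2": 100_000, "l3": 20_000, "dram": 10_000}
total = memory_energy(accesses)
dram_share = accesses["dram"] * ENERGY_PJ["dram"] / total
```

Even with only 1% of references going off chip, DRAM dominates the total here (about 62% of it), which is why Rowen frames off-chip traffic as the fundamental cost to design around.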
Contemporary processor architectures try to
minimize latency and reduce the number of off-chip references by using caches. "The
nature of a cache is it's guessing," Rowen said. This is good when it comes to handling the
uncertainties of applications. But in an era when we design more and more
around types of data, this approach can be suboptimal.
"As we follow the data, we care
intensely about the data-intensive tasks where we have some significant
opportunities to really handle data in a better, more structured way."
Here, engineers can
optimize computing and energy efficiency.
"Part of the conventional wisdom of
the SoC world is that dedicated logic is very efficient, and it is. We
routinely see data...that shows the energy efficiency -- the MOPS/mW -- can, in
dedicated logic, be as much as a thousand times better than doing the same
computation on a general-purpose CPU."
But Rowen said this holds true only in
specialized cases where little data is being managed because accessing the data
-- fetching and decoding -- is costly from an energy standpoint. Performing data-intensive computation in
dedicated logic, "you might find that you were spending all your energy doing
the same memory accesses that you would do in all these other kinds of
processors and you'd lose the advantage," he added.
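Rowen's caveat can be sketched with a two-term energy model. Only the roughly 1000x logic-vs-CPU per-operation ratio comes from the talk; the absolute energies are invented for illustration:

```python
# Why dedicated logic's energy edge shrinks on data-intensive work.
# Per-operation and per-access energies (pJ) are illustrative assumptions;
# only the ~1000x logic-vs-CPU ratio is taken from the talk.

E_OP_CPU_PJ = 100.0    # energy per operation on a general-purpose CPU
E_OP_LOGIC_PJ = 0.1    # ~1000x better in dedicated logic
E_MEM_PJ = 500.0       # per memory access, identical for both

def task_energy(ops, mem_accesses, e_op):
    """Total energy = compute term + memory term."""
    return ops * e_op + mem_accesses * E_MEM_PJ

# Compute-bound task: few memory accesses per operation.
cpu_cb = task_energy(1_000_000, 100, E_OP_CPU_PJ)
logic_cb = task_energy(1_000_000, 100, E_OP_LOGIC_PJ)

# Memory-bound task: one memory access per operation.
cpu_mb = task_energy(1_000_000, 1_000_000, E_OP_CPU_PJ)
logic_mb = task_energy(1_000_000, 1_000_000, E_OP_LOGIC_PJ)
```

Under these assumptions the dedicated-logic advantage is several hundredfold on the compute-bound task but collapses to barely above 1x on the memory-bound one: both implementations end up spending their energy on the same memory accesses.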
So, contemporary SoC
design has responded to these challenges in three broad ways:
- CPUs (great for software-intensive tasks)
- Programmable data processors
- Hardwired RTL
Rowen said that in current SoC innovation, engineers
are often much better off delegating more to software and less to specialized
hardware.
Here, Rowen called out the notion of dataplane
processing and the technological approach that Tensilica (acquired by Cadence
in early 2013) was founded on.
This approach allows every architect to select or describe the key attributes--the
instruction set, the interfaces of the processor--and, at a high level, use a
processor generator that creates the complete hardware design and the
complete software development environment: compilers, debuggers, simulators,
RTOS ports, everything needed to instantiate and program the design, so you can
generate any set of processors you need, he said.
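The generator flow Rowen describes can be caricatured in a few lines: a high-level spec goes in, and matched hardware and software artifacts come out. Every name and field below is invented for illustration; this is not Tensilica's actual tool or file format:

```python
# Toy sketch of a "processor generator": one spec drives generation of
# matched hardware and software artifacts. All names here are hypothetical.

def generate_processor(spec):
    """Return the artifact set a generator would emit for one processor spec."""
    hardware = {
        "rtl": f"{spec['name']}_core.v",
        "isa": spec["base_isa"] + spec.get("custom_instructions", []),
    }
    # The software environment is derived from the same spec, so tools and
    # hardware stay consistent by construction.
    software = {tool: f"{spec['name']}-{tool}"
                for tool in ("compiler", "debugger", "simulator", "rtos-port")}
    return {"hardware": hardware, "software": software}

audio_dsp = {
    "name": "audio_dsp",
    "base_isa": ["load", "store", "mac"],
    "custom_instructions": ["fft_butterfly"],  # hypothetical extension
}
artifacts = generate_processor(audio_dsp)
```

The design point worth noting is that the compiler, debugger, and simulator are regenerated from the same description as the RTL, which is what makes per-application instruction-set customization practical.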
This puts enormous flexibility into the hands of designers and architects,
whether it's the Xtensa customizable processor solution or the pre-configured
cores for audio, video, imaging and communications.
"Driving data-centric processing is at
the heart of this flexible SoC design method," he said.
Rowen closed, in part, by saying:
"Real systems are deep pipelines of
computation from sensor to cloud, so you really need a system view of the
energy, the computation, and the application driving it."
-- ISSCC: Perspectives on System-Design Evolution
-- We Need to Move "Past EDA": Tensilica Founder
-- CDNLive Silicon Valley Proceedings