Embedded Vision's Transformative Potential

Filed under: Tensilica, Brian Fuller, Internet of Things, electronics design, semiconductor design, Fuller View, ip cores, embedded systems design, embedded systems software, image sensors, embedded vision, Jeff Bier, Embedded Vision Alliance

Embedded vision, a "once-in-a-generation" technology, is an enormous and lucrative opportunity for the semiconductor ecosystem, but compute and system-design issues pose challenges in the short term.

That was the assessment of Jeff Bier, engineer and founder of the Embedded Vision Alliance, who spoke to Cadence employees during a recent visit.

[Photo: Jeff Bier, founder of the Embedded Vision Alliance]

"Embedded vision is...one of these once-in-a-generation technologies," Bier (pictured, right) said in a presentation covering embedded vision technologies and their challenges and opportunities. "The closest parallel I can draw is wireless. Wireless is a huge industry that touches almost every aspect of technology."

The technology is poised to improve safety, boost efficiency and productivity, and simplify usability, he added.

Embedded vision's promise

For Bier, it's clear that embedded vision's time is now because processing technology has crossed an important threshold. As an example, he held up Texas Instruments' C6000 single-core DSP, designed for high-performance, streaming, media-intensive applications but also optimized for cost and energy efficiency.

This device has recently crossed the point at which it can perform 10 billion multiply-accumulates a second. That's key, according to Bier, because that performance level is a typical compute requirement for a vision application.
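To put that threshold in perspective, here is a back-of-the-envelope calculation. The resolution, frame rate, and per-pixel operation count below are illustrative assumptions, not figures from Bier's talk; the point is only that an ordinary video stream quickly adds up to roughly 10 billion multiply-accumulates per second.

```c
#include <stdio.h>

/* Rough estimate of the compute load of a video-rate vision pipeline.
 * All numbers here are illustrative assumptions, not measured figures. */
int main(void)
{
    const double width  = 1280;       /* pixels per row (assumed 720p input) */
    const double height = 720;        /* rows per frame */
    const double fps    = 30;         /* frames per second */
    const double ops_per_pixel = 350; /* assumed MACs across all stages */

    double pixels_per_sec = width * height * fps;           /* ~27.6M pixels/s */
    double macs_per_sec   = pixels_per_sec * ops_per_pixel; /* ~9.7G MACs/s */

    printf("Pixel rate: %.1f Mpixels/s\n", pixels_per_sec / 1e6);
    printf("Compute:    %.1f GMAC/s\n",   macs_per_sec / 1e9);
    return 0;
}
```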

Said Bier:

"This (threshold-crossing) is really, in my mind, the key reason...vision is transforming from something you'd find (just) in factories with half-million-dollar systems doing manufacturing and quality control to something that's going to be in your living room, in your car, in the store ... it's going to be everywhere."

Available markets, Bier argued, will quickly expand from established areas such as factory automation, mil/aero, and video-game consoles to building automation, robots, education, healthcare, field service, and other areas.

Technical challenges

But first some technology hurdles need to be reckoned with, Bier said, noting "these are very hard problems algorithmically."

"The fundamental reason vision is hard is the inputs are infinitely varying," he said.

He described, at an abstract level, a typical "feed-forward" system in which a challenge inversion occurs. In the initial stages right after image capture, the embedded-vision system runs at extremely high data rates as it processes every color component of every pixel. A huge amount of data is processed very quickly, but with relatively simple algorithms performing tasks such as correcting the image distortion caused by imperfect lenses, Bier said.

So at the front end, the system is handling perhaps 8-10 math operations per data item but tens or hundreds of millions of data items per second. That "huge" amount of computing quickly pushes the processor up to billions of operations per second, he added.
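A minimal sketch of that front-end pattern (a generic 3x3 convolution written for this post, not code from the talk): about nine multiply-accumulates per output pixel, simple arithmetic, but executed for every pixel of every frame.

```c
#include <stdint.h>

/* Front-end stage sketch: a 3x3 convolution over an 8-bit grayscale frame.
 * Roughly nine multiply-accumulates per output pixel; the algorithm is
 * simple, but it runs on every pixel, so the data rate dominates. */
void filter3x3(const uint8_t *in, uint8_t *out,
               int width, int height, const int kernel[3][3], int shift)
{
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            int acc = 0;
            for (int ky = -1; ky <= 1; ky++)
                for (int kx = -1; kx <= 1; kx++)
                    acc += kernel[ky + 1][kx + 1]
                         * in[(y + ky) * width + (x + kx)];
            acc >>= shift;          /* fixed-point scaling */
            if (acc < 0)   acc = 0; /* clamp to 8-bit range */
            if (acc > 255) acc = 255;
            out[y * width + x] = (uint8_t)acc;
        }
    }
}
```

At the assumed 720p/30fps rate, the inner loop body above runs on the order of 27 million times per second, which is why per-pixel cost and memory bandwidth dominate at this stage.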

The inversion occurs as you move down that feed-forward pipeline. Farther down, the data rates fall "radically, down to thousands of data items per second" but the algorithm complexity soars, from "tens of lines of code to hundreds of thousands of lines of code," Bier said.

Why? Because the algorithms are written at that point to help the system understand interesting and tricky features of the landscape. For example, is what the sensor is seeing a traffic lane or a seam in the road pavement?

He said:

"So the data rates go down as we get toward the end of the pipeline, but the algorithm complexity goes way, way up. As you go toward the end of the line, you've got heuristics and machine learning and millions of lines of code operating at very low data rates." 

Additional challenges

Additionally, the embedded vision workload is highly heterogeneous, with different kinds of algorithms often operating on different data types. That requires a heterogeneous processor to implement it, Bier said.

There is, he added, something of a land grab under way over how to architect the processing for these systems. Approaches include (a partitioning sketch follows the list):

  • High-performance embedded CPUs
  • ASSP + CPU
  • GPU + CPU
  • Vision processor + CPU
  • Mobile apps processor
  • FPGA + CPU
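Whatever the silicon mix, the partitioning pattern is similar. Here is a hypothetical sketch; the dsp_* and cpu_* functions below stand in for a vendor offload API and are inventions for illustration, not a real library.

```c
#include <stdint.h>
#include <stdlib.h>

/* Placeholder offload layer: these stubs stand in for a vendor's
 * DSP/GPU/FPGA API; they are illustrative, not a real library. */
typedef struct { float x, y, strength; } Feature;
static uint8_t *dsp_alloc(int bytes)   { return malloc(bytes); }
static void     dsp_free(uint8_t *buf) { free(buf); }
static void     dsp_run_kernel(const char *name, const uint8_t *in,
                               uint8_t *out, int w, int h)
{ (void)name; (void)in; (void)out; (void)w; (void)h; }
static int  cpu_extract_features(const uint8_t *img, int w, int h,
                                 Feature *out, int max)
{ (void)img; (void)w; (void)h; (void)out; (void)max; return 0; }
static void cpu_classify(const Feature *ft, int n) { (void)ft; (void)n; }

/* The pattern shared by the options above: pixel-rate kernels run on
 * the specialized engine; low-rate, complex decision logic stays on
 * the CPU. */
void process_frame(const uint8_t *raw, int w, int h)
{
    uint8_t *corrected = dsp_alloc(w * h);

    /* High data rate, simple math: offload to the DSP/GPU/FPGA. */
    dsp_run_kernel("lens_correct", raw, corrected, w, h);
    dsp_run_kernel("edge_detect",  corrected, corrected, w, h);

    /* Low data rate, complex logic: keep on the host CPU. */
    Feature features[256];
    int n = cpu_extract_features(corrected, w, h, features, 256);
    cpu_classify(features, n);

    dsp_free(corrected);
}
```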

Bier noted that the heterogeneous approach is driven in part by CPU limitations: CPUs, while easy to use, often run out of performance and memory bandwidth in these streaming-intensive applications.

He added that companies such as Tensilica, which Cadence acquired in the spring, are stepping in with finely tuned DSP cores to take advantage of the growing market.

For Bier, embedded vision is clearly the Wild West, with opportunity for many. Standards are unsettled (and may never settle); the ecosystem is just forming (a process his Embedded Vision Alliance is helping facilitate); and the demand for vision solutions has tremendous potential.

"It's a huge opportunity for companies in the semiconductor industry," Bier said.

Brian Fuller

Related stories:

-- Embedded World 2013: Virtual Platforms Connected to Everything

-- Interview with Lip-bu Tan, Part 2: Energizing the Electronics Industry

 
