DAC 2014: Computer Vision Coming but Requires Engineering Flexibility, Creativity

Filed under: design automation, Chris Rowen, NVIDIA, semiconductor design, IC design, Intel, embedded vision, DAC 2014, computer vision

Design and architecture flexibility are vital if the electronics industry is to deliver useful, accessible, and affordable computer-vision applications in the coming years, according to a panel at DAC 2014.

The June 5 panel, titled "Hardware-Software Codesign for Computer Vision: Can We Make a Computer See?", saw panelists spar over implementation details, but they generally agreed that the limits of transistor and power scaling, together with the demands of software implementation, call for new approaches.

"How do we cope with the computational intensity challenge and the creative intensity of the space?" Cadence Fellow Chris Rowen (pictured, right) asked during the June 5 afternoon session. "That's going to influence everything that lands in silicon."

Computer vision is a space he dubbed the "Wild West" because of its lack of standards, and panelists agreed that a variety of approaches is required since the applications will vary widely.

Computer vision learning

Stanford Professor Andrew Ng described the challenge by relating the Google Brain project on which he worked several years ago. The team used 16,000 CPU cores to train a system on unlabeled frames from YouTube videos, a process known as unsupervised learning. Without ever being told what to look for, the system learned to identify faces and, eventually, images of cats.

"The remarkable thing about this was it had discovered the concept of a cat by itself," Ng noted. "These large deep learning algorithms are driving substantial economic value."

Ng said the "hunger for bigger systems continues" as engineers tackle larger computer-vision problems. But he cautioned against pursuing computing solutions for the sake of computing, even where it might be put up against thornier challenges like unsupervised data analysis (in other words not training a system at first to look for certain clues to piece together a larger image of, say, a human face or a cat).

Ng, who is also chief scientist at Baidu, said:

"Hardware groups are building systems that can simulate a trillion connections. Those are good supercomputing results but the relevance to Baidu, Google, Facebook, Microsoft, or Apple is non-existent." 

For computer vision, Ng sees a major shift in the coming years away from supervised learning and toward learning from untagged (unsupervised) data, and this will require different approaches to electronics systems design.

"The shift to unsupervised algorithms has discovered the concept of a cat," he said. "Hardware will need to flexible and programmable because we really don't know what the algorithm needs to be. As a society, we have access to more unlabeled data than labeled data."

Panel moderator Yankin Tanurhan of Synopsys, however, expressed some skepticism, noting that RAF analysts in World War II identified V-1 rockets from grainy pictures without, at least at first, knowing what a V-1 was. Would a computer vision system be as successful in the same situation? "I have my doubts about this so-called learning effect," he said.

Michael B. Taylor, a professor in the University of California, San Diego's computer science and engineering department, said another challenge is the scaling of transistors and energy efficiency: the problem of so-called "dark silicon" that follows from the breakdown of Dennard scaling.

"We have exponentially more transistors (but) we can't switch them and can't use them for computation," Taylor said. "But we can use them for memory."

"There's a pretty interesting result that's coming along, which is that we're getting to the point where we're not going to have to store all of our video off chip," Taylor said, noting that those otherwise "dark silicon" transistors used as memory instead of logic will enable that. "We're actually going to be able to fit a lot of it on chip. And that's going to really help us with efficiency." 

Expectations rising

Falling pixel-processing costs and early system success in the marketplace have created big expectations, and this puts all the more pressure on engineering teams, Rowen noted.

He said:

"We don't lack for problems to solve. What we lack are the combination of architectures which have the kind of efficiency that allow them to be deployed in mass quantities. It's all good to say I have 1000 servers and 16,000 processors, but I don't want to do that on my wristwatch." 

He urged the audience to think about the problem along three axes:

  • What's the architecture of the computation nodes? Is it a hardwired data path? A general-purpose processor? Something in between?
  • How do we interconnect blocks with imaging and vision DSPs and how do they all talk to memory?
  • What algorithms and programming models should be used to get things done? OpenVX, he noted, is a "big step" in the right direction (a sketch of the graph model it standardizes follows this list).
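To illustrate that third axis, here is a minimal Python sketch of the graph-based programming model that OpenVX standardizes. The real OpenVX API is C, and the names below (Graph, Node, the stub operations) are hypothetical stand-ins; the point is only that the application declares a dataflow graph of vision operations and leaves a runtime free to map each node onto whatever the silicon offers, whether a hardwired block, a vision DSP, or a CPU.

    # Hypothetical sketch of a dataflow-graph vision pipeline, in the spirit
    # of OpenVX (whose real API is C). Names and ops are illustrative only.
    class Node:
        def __init__(self, op, *inputs):
            self.op, self.inputs = op, inputs

    class Graph:
        def __init__(self):
            self.nodes = []

        def add(self, op, *inputs):
            node = Node(op, *inputs)
            self.nodes.append(node)
            return node

        def run(self, source):
            # Naive in-order execution; a real runtime would schedule nodes
            # onto heterogeneous engines and tile buffers through local memory.
            results = {}
            for node in self.nodes:
                args = [results[i] for i in node.inputs] or [source]
                results[node] = node.op(*args)
            return results[self.nodes[-1]]

    # A toy pipeline: blur -> gradient -> threshold (stub math, not real filters).
    g = Graph()
    blur = g.add(lambda img: [x * 0.9 for x in img])
    grad = g.add(lambda img: [abs(b - a) for a, b in zip(img, img[1:])], blur)
    edges = g.add(lambda img: [1 if x > 0.05 else 0 for x in img], grad)
    print(g.run([0.0, 0.1, 0.5, 0.5, 0.9]))

Because the graph is declared before it runs, the same application source can target a hardwired pipeline on one chip and a programmable DSP on another, which is exactly the flexibility the panel kept returning to.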

EDA to the rescue?

Since it was an EDA panel, the question of what design automation's role should be inevitably arose.

Jason Clemons, a research scientist with NVIDIA, said:

"We need a way to explore the design space. Give us utilities to allow us to play around with basic components and performance metrics that allow us to evaluated beyond ... area and power."

Intel Fellow Doug Carmean urged the audience to think broadly about solving computer vision problems because it's not so much about computer vision, per se, as it is about computer understanding: 

"One of the things you've heard as a theme is people wondering ‘are there fixed-function units or ... general-purpose units?' It's not an ‘or;' it's an ‘and.' The chips of the future that we will be designing will have general-purpose functionality, will have special-purpose functionality... they'll have filters, DSPs, they'll be programmable, they'll be configurable. That leads us to computers that can actually understand."

 

Brian Fuller

Related stories

- DAC 2014 Keynote: EDA Can Tap Into New Revenue Streams

- DAC 2014 Dual Keynote: How Automobiles Are Getting Smarter

- Embedded Vision Summit: Focus on Autonomy and Recognition
