IP Talks! Keynote at DAC 2014—Rethinking Image Processing in SoC Design

Filed under: ChipEstimate.com, semiconductor IP, IP Talks, DAC 2014, Imagination Technologies, ISP, McGuinness

Many systems on chip (SoCs) have a "camera block" or image signal processor (ISP) that takes raw data from an image sensor and manipulates that data. But ISPs are moving away from their traditional role and turning into "vision subsystems," according to Peter McGuinness, director of multimedia technology marketing at semiconductor IP provider Imagination Technologies.

McGuinness was the keynote speaker at IP Talks!, a three-day program of presentations at the ChipEstimate.com booth at the recent Design Automation Conference (DAC 2014). His half-hour talk was titled "Visuals to Vision: The Changing Role of the Image Sensor." Videos of this and the other IP Talks! presentations are available (login required; registration is quick if you don't have an account).

 

Peter McGuinness of Imagination Technologies presents at IP Talks! at DAC 2014

McGuinness first looked at the traditional role of the ISP. It has a set of familiar functions: it takes raw image data from the sensor and manipulates that data to correct for defects in the sensor or in the CMOS process. It then uses both hardware and software to produce a good image. "That's the classical role of the ISP, but it's changing," McGuinness said. "It's moving away from producing images and becoming a vision subsystem." As such, he explained, the image sensor is now a source of data for processing later in the pipeline.
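
To make that defect-correction step concrete, here is a minimal sketch of one classic ISP operation: replacing a dead sensor pixel with the median of its neighbors. The grayscale buffer and function name are illustrative assumptions, not Imagination's implementation.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Replace a pixel flagged as defective with the median of its eight
// neighbors. Illustrative only: a real ISP works on Bayer-pattern raw
// data and detects defects dynamically; this assumes a grayscale buffer
// and an interior pixel (no bounds checking).
uint16_t correct_defective_pixel(const std::vector<uint16_t>& raw,
                                 int width, int x, int y) {
    std::vector<uint16_t> neighbors;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx)
            if (dx != 0 || dy != 0)  // skip the defective pixel itself
                neighbors.push_back(raw[(y + dy) * width + (x + dx)]);
    std::nth_element(neighbors.begin(),
                     neighbors.begin() + neighbors.size() / 2,
                     neighbors.end());
    return neighbors[neighbors.size() / 2];
}
```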

Distributing a workload

As McGuinness noted, applications for imaging are expanding rapidly—imaging is no longer just a question of producing nice videos or photos. Automotive electronics provides a good example. Here, imaging is (or will be) used to help drivers back up, avoid collisions, stay in their lane, and recognize street signs. Imaging also has new applications in retail sales, where a store may have an unattended kiosk and cameras may be used for facial recognition.

Applications such as these use workloads with a lot of data parallelism. That means you can take a workload and distribute it across the system, making use of GPUs and CPUs as well as ISPs. The result is better performance for a given power envelope. As McGuinness noted, vision software is heterogeneous, and intelligently combining heterogeneous compute resources enables the most differentiation at the lowest cost.
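
As a rough illustration of that data parallelism (all names and parameters here are assumptions, not taken from the talk), the sketch below interleaves a frame's rows across CPU worker threads. The same per-pixel independence is what would let tiles be handed to a GPU or left in the ISP instead.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <thread>
#include <vector>

// Apply a per-pixel tone curve to a frame with rows interleaved across
// CPU worker threads. Each output pixel depends only on its own input,
// so the same work could equally be tiled out to a GPU or kept in the
// ISP; the data parallelism is what makes it distributable.
// (Buffer layout and the curve itself are illustrative assumptions.)
void tone_map_parallel(std::vector<uint8_t>& frame, int width, int height) {
    const int n_workers =
        std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (int w = 0; w < n_workers; ++w)
        workers.emplace_back([&frame, width, height, w, n_workers] {
            for (int y = w; y < height; y += n_workers)  // interleaved rows
                for (int x = 0; x < width; ++x) {
                    uint8_t& p = frame[y * width + x];
                    p = static_cast<uint8_t>(255.0 * std::sqrt(p / 255.0));
                }
        });
    for (auto& t : workers) t.join();
}
```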

So what should run where? A CPU is good for non-parallel, serial code with a lot of branching, McGuinness said. Typically this code will be single-threaded or have a low thread count. It needs only small amounts of data but can make critical decisions. A GPU, in contrast, is good for very large data sets with parallel operations. The parallelism comes not only from wide-word data sets but also from the fact that many operations involve only adjacent pixels.
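
The point about adjacent pixels is the key one. A 3x3 filter, sketched below under assumed buffer conventions, reads only a small fixed neighborhood per output pixel, so millions of outputs can be computed independently.

```cpp
#include <cstdint>
#include <vector>

// 3x3 box blur on a grayscale image: each output pixel reads only its
// immediate neighborhood, so every interior pixel can be computed
// independently of the others. This is the shape of work a GPU
// consumes well. (Borders are simply skipped in this sketch.)
void box_blur_3x3(const std::vector<uint8_t>& src,
                  std::vector<uint8_t>& dst, int width, int height) {
    for (int y = 1; y < height - 1; ++y)
        for (int x = 1; x < width - 1; ++x) {
            int sum = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    sum += src[(y + dy) * width + (x + dx)];
            dst[y * width + x] = static_cast<uint8_t>(sum / 9);
        }
}
```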

Envisioning a "vision system"

A typical camera subsystem today has a separate CMOS image sensor chip. An imaging pipeline takes the raw data and outputs YUV data that can be sent to an applications processor on the SoC. However, McGuinness observed, about all you can do with that YUV data is perform operations like autofocus and white balance. "You've really removed a lot of information," he said. "There is an opportunity to produce information from the raw data that is usable in a vision system."
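
For reference, the conversion that produces that YUV output is a simple per-pixel matrix transform. The sketch below assumes approximate full-range BT.601 coefficients; by the time frames reach this stage, the raw Bayer data and per-pixel statistics a vision algorithm might want have already been discarded upstream.

```cpp
#include <cstdint>

// Approximate full-range BT.601 RGB -> YUV conversion: the kind of
// finished output a traditional camera pipeline hands to the
// applications processor. (Coefficients vary by standard; this sketch
// assumes 8-bit full-range values, which stay within 0..255 here.)
void rgb_to_yuv(uint8_t r, uint8_t g, uint8_t b,
                uint8_t& y, uint8_t& u, uint8_t& v) {
    y = static_cast<uint8_t>( 0.299 * r + 0.587 * g + 0.114 * b);
    u = static_cast<uint8_t>(-0.169 * r - 0.331 * g + 0.500 * b + 128);
    v = static_cast<uint8_t>( 0.500 * r - 0.419 * g - 0.081 * b + 128);
}
```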

An "integrated" vision system, in contrast, does not wait for YUV data—it takes in raw data from the CMOS sensor. The ISP pipeline performs traditional operations like bi-pixel fixing, tone mapping, and correction for lens distortion. However, functions like focus statistics, white balance statistics, and exposure statistics are not handled in the ISP itself but are sent over to main memory, where they can be picked up by the CPU or the GPU depending on the amount of data involved.

The ISP is moving onto the chip, but it is also changing in other ways. "The ISP is collaborating with the other compute resources that are on the SoC, and that changes the nature of the game," McGuinness said. "It means you can do customized things in software that you could not do earlier when the imager was on a separate chip and you were just presented with a finished image."

Enabling new functionality

McGuinness identified two "areas of function" that were not available in the traditional camera pipeline. One is computational photography, which makes it possible to take a number of different images and interpolate to produce a single, improved image. Another is the use of camera arrays to provide different viewpoints and to manipulate depth of field. "Essentially, you can create the picture after you've taken the raw image that needs to be processed."
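
As one concrete instance of merging several captures into a better picture, the sketch below averages pre-aligned exposures to reduce noise, a basic computational-photography operation. Alignment, weighting, and outlier rejection are assumed away for brevity.

```cpp
#include <cstdint>
#include <vector>

// Merge N pre-aligned grayscale frames by averaging: trades several
// noisy captures for one cleaner image. Real pipelines also register,
// weight, and reject outliers; this sketch assumes the frames are
// already aligned and equally sized.
std::vector<uint8_t> merge_frames(
        const std::vector<std::vector<uint8_t>>& frames) {
    const size_t n = frames.size();
    const size_t pixels = frames.front().size();
    std::vector<uint8_t> out(pixels);
    for (size_t i = 0; i < pixels; ++i) {
        uint32_t sum = 0;
        for (const auto& f : frames) sum += f[i];
        out[i] = static_cast<uint8_t>(sum / n);
    }
    return out;
}
```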

McGuinness briefly described the Imagination Technologies PowerVR Heterogeneous Vision Platform and the Raptor camera ISP that it uses. The platform can include CPUs, GPUs, and video encoders. To reduce power, McGuinness observed, Imagination worked hard to reduce the number of memory transactions by allowing the Raptor ISP to send images directly to the video encoder. This kind of optimization, he said, shows that Imagination is "really a systems company, not just an IP block company."

To view the video replay of the McGuinness keynote, click here.

To see a listing of all the available videos from IP Talks! 2014, click here. You will be asked to log in or register. Once logged in, you can also view IP Talks! 2014 video presentations from speakers from ADICSYS, Argon Design, ARM, Cadence, eSilicon, Ferric Semiconductor, GLOBALFOUNDRIES, Methodics, Mixel, Open-Silicon, Sidense, Silab Technologies, Synopsys, and True Circuits.

In a short video interview following the keynote, Sean O'Kane of ChipEstimate.com TV and McGuinness talked about GPU computing, the cost of wearables, and support for always-on applications. To view that video, click here.

Note: A July 2014 "Tech Talk" article at ChipEstimate.com describes Imagination Technologies' PowerVR Rogue GPUs.

Richard Goering

Related blog posts

DAC 2014 Keynote: Imagination CEO Charts New Opportunities for Semiconductors

DAC 2014: Semiconductor IP Trends Revealed at "IP Talks!"

CDNLive: Envisioning the Future of IP-Driven System Design
