
Google Driverless Car's Sensor, Vision, and Computing Future

Filed under: Google, MEMS, image sensor, embedded systems design, embedded vision, Embedded Vision Alliance, autonomous cars, Nathaniel Fairfield

SANTA CLARA, Calif.--Holistic system design coupled with gradual design evolution and sensor fusion will put autonomous vehicles in every driveway. Some day. At least that was my takeaway from a recent presentation by Nathaniel Fairfield, the technical lead of Google's autonomous vehicle program.

Fairfield delivered a keynote here to a packed room of Embedded Vision Summit attendees, but he was tantalizingly light on technical specifics when it came to sensor architectures and design roadmaps. Still, he gave glimpses of impressive design engineering and made a solid case for the robot cars that swivel heads on highways. He cited potential improvements in: 

  • Safety (road accidents are a leading cause of death in the United States for people ages 4 to 35). "Of all the other causes, this one is preventable."
  • Efficiency (studies have shown that optimal highway traffic flow is one car for every 66 feet. "That's sparse. We can pack more cars in, and that way you don't have to build whole new freeways.")
  • Environment (more people are killed by pollution than by cars themselves, he argued).
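The efficiency argument is easy to quantify with back-of-the-envelope arithmetic: per-lane flow is just speed divided by spacing. The 60 mph cruising speed below is an illustrative assumption; Fairfield cited only the 66-foot spacing figure.

```python
# Back-of-the-envelope highway throughput from the "one car every 66 feet"
# figure. Per-lane flow (vehicles/hour) = speed / spacing.

MPH_TO_FT_PER_S = 5280 / 3600   # 1 mph ≈ 1.467 ft/s

def lane_flow_per_hour(speed_mph: float, spacing_ft: float) -> float:
    """Vehicles passing a fixed point per hour in one lane at uniform spacing."""
    speed_ft_s = speed_mph * MPH_TO_FT_PER_S
    return speed_ft_s / spacing_ft * 3600

# At 60 mph (88 ft/s) with one car every 66 ft: 88 / 66 ≈ 1.33 cars/s
print(round(lane_flow_per_hour(60, 66)))   # 4800 vehicles/hour/lane
# Halving the headway to 33 ft doubles throughput -- no new pavement needed.
print(round(lane_flow_per_hour(60, 33)))   # 9600 vehicles/hour/lane
```

Tighter, machine-controlled spacing raises throughput linearly, which is the "pack more cars in" claim in miniature.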

While this technology is undoubtedly amazing already, Google is aiming quite high.

"Our mission is to transform mobility, not make slightly better cruise control," Fairfield said.

The challenge is to design a human-like solution that isn't human and to do so in a noisy chaotic real-world environment at an affordable cost with reasonable compute requirements. No problem, right?

For the time being, both the first Prius- and Lexus-based generations of Google's autonomous vehicles and the more recent fully autonomous version use multi-sensor, multi-modal technology to navigate the world, from radar and lasers to cameras and Google maps. 

Tech challenges

Making sense of that data in real time is nontrivial. For example, meshing together data from all those cameras (he wouldn't tell us how many) and sensors should require enormous computational capability, but that turns out not to be such an issue. In the Lexus case, "they use a standard equivalent of a desktop computer," Fairfield said. "It doesn't need a lot because it's on the freeway and there's a lot of structure (via mapping) we can exploit."

Atop the vehicles sit Velodyne Lidar systems to complement other sensors and cameras. Can those be replaced with, say, a number of additional (and less costly) cameras? 

Although Google is looking into reducing sensor costs, replacing the Lidar is not realistic, Fairfield said: 

"Cameras are fantastic for some purposes but the laser is great--for example, at night. It just sees in the dark. Otherwise you'd have to surround the car with spotlights. I don't see one (type) of sensor replacing another. I see us getting better at integrating them and building that combined system. I really do think they're all important."

One attendee probed Fairfield about the sensor architecture to understand where the filtering and processing is managed. He kept it close to the vest: 

"Because we are computationally limited, the sort of theoretically awesome, dump-it-into-one-massive filter that sorts it all out, that's not really tractable. We build simpler systems to combine the information and then fuse it at a higher level."
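The two-stage structure Fairfield describes can be sketched in a few lines: each sensor gets its own cheap estimator, and only the compact per-sensor estimates (not the raw data) are combined at a higher level. The names, the trivial averaging filter, and the inverse-variance combination rule below are illustrative assumptions, not Google's actual design.

```python
# Hedged sketch of hierarchical sensor fusion: simple per-sensor filters
# feed compact estimates to a higher-level fusion step, instead of one
# monolithic filter over all raw data.

from dataclasses import dataclass

@dataclass
class Estimate:
    """A per-sensor position estimate with its uncertainty (variance)."""
    position: float
    variance: float

def per_sensor_filter(readings: list, sensor_variance: float) -> Estimate:
    """Stage 1: a trivial per-sensor estimator (here, just an average)."""
    mean = sum(readings) / len(readings)
    return Estimate(mean, sensor_variance / len(readings))

def fuse(estimates: list) -> Estimate:
    """Stage 2: combine compact estimates, weighting by inverse variance."""
    total_precision = sum(1.0 / e.variance for e in estimates)
    position = sum(e.position / e.variance for e in estimates) / total_precision
    return Estimate(position, 1.0 / total_precision)

lidar = per_sensor_filter([10.2, 10.1, 10.3], sensor_variance=0.09)
radar = per_sensor_filter([10.6, 10.4], sensor_variance=0.36)
fused = fuse([lidar, radar])
# The fused estimate lands between the two, pulled toward the
# lower-variance lidar, and is more certain than either input.
```

The payoff is exactly the one in the quote: the expensive part (raw data) stays local to each sensor, and the fusion step only touches a handful of numbers.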

Asked whether the roadmap (pardon the pun) will reduce reliance on maps, Fairfield replied, "I'd love to. Longer term that's a good direction (and) it can be a gradual process. But we have no plans to do it (soon)."

Another attendee asked about how to mitigate interference among lasers. Fairfield said:

"The radars--off-the-shelf radars--are designed not to interfere with each other. With the lasers, we have detected interference...it was still less than the interference you get from the sun and other sparkly stuff you get in the world."

If, like me, you're dying to buy one of these driverless cars to ease your mind during lousy commutes and long hauls, you're just going to have to be patient. In the meantime, we can satisfy ourselves by watching a really amazing design evolution happen before our eyes. 

Brian Fuller

Related stories:

-- Embedded Vision's Transformative Potential 

