The advanced driver assistance system (ADAS) segment—essential for enhancing the driver experience and overall safety—is one of the fastest-growing segments of the automotive semiconductor space. The amount of electronics grows rapidly with the vehicle's level of automation. As a consequence, autonomous driving is causing major disruptions in the automotive industry, because ADAS applications require a performance level that goes far beyond popular microcontrollers. Therefore, a new class of high-performance systems-on-chip (SoCs) is needed to process all the sensor data and fuse it together.
In addition, high-definition digital maps and cloud-based services provide additional precise and partially redundant information that further enhances the driver's awareness, enabling safe, real-time control of the car in all situations. Hence ADAS SoCs enable vehicles to become "aware" of their surroundings—but at a cost in chip area, power consumption, and performance.
High-performance ADAS SoC requirements include:
- Machine learning: Dedicated, optimized, and fully programmable neural network processor cores
- High compute performance: 1TMAC/s in <1mm² to support a digital signal processing architecture tuned for compute-intensive algorithms, delivering an optimal SoC performance/power/area ratio
- High network bandwidth: 1Gbit/s or more to support low-latency transmission of high-resolution video/image or control data
- High memory bandwidth: >3Gbit/s data-rate interfaces and sufficient memory space to store and access the intermediate results generated by highly complex algorithms
- Low power consumption: <9W power consumption for the ADAS application
- Safety: Safety architecture including documentation to support ISO 26262-compliant system development
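To put the 1TMAC/s compute requirement in perspective, a rough MAC budget for a single convolutional layer can be sketched. The layer dimensions below are illustrative assumptions, not figures from this overview:

```python
# Back-of-the-envelope MAC budget for one convolutional layer.
# All layer dimensions are illustrative assumptions.
out_h, out_w = 128, 128        # output feature map size
in_ch, out_ch = 64, 64         # input/output channels
k = 3                          # 3x3 kernel

macs_per_frame = out_h * out_w * in_ch * out_ch * k * k
print(f"MACs per frame: {macs_per_frame / 1e9:.2f} GMAC")

dsp_macs_per_s = 1e12          # 1 TMAC/s, as stated in the requirement above
fps = dsp_macs_per_s / macs_per_frame
print(f"Frames/s at 1 TMAC/s (this layer alone): {fps:.0f}")
```

Even a single mid-sized layer consumes hundreds of MMACs per frame, which is why a full multi-layer network at automotive frame rates quickly motivates the multi-TMAC figures quoted later in this overview.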
Imaging and Computer Vision Processing
ADAS applications are becoming increasingly popular because they make cars safer and more comfortable. The huge amount of data generated by these systems—up to 1GByte/s (4TByte/day)—requires very powerful data-processing platforms with an AI performance of up to 30TMAC/s.
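The two headline figures are consistent if the peak rate is sustained for only part of the day; a quick sanity check (decimal units assumed throughout):

```python
# Relating the headline figures: 1 GByte/s peak sensor rate vs. 4 TByte/day.
peak_rate_bytes_per_s = 1e9              # 1 GByte/s, from the text
daily_volume_bytes = 4e12                # 4 TByte/day, from the text

seconds_at_peak = daily_volume_bytes / peak_rate_bytes_per_s
hours_at_peak = seconds_at_peak / 3600
print(f"{hours_at_peak:.1f} h of peak-rate capture per day")
```

In other words, the 4TByte/day figure corresponds to roughly an hour of driving at the full 1GByte/s sensor rate.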
With optimizable Cadence® Tensilica® DSPs and the associated software partner ecosystem, applications for computer vision, imaging, neural networks, lidar, radar, ultrasound, and V2X can be efficiently implemented, saving silicon area and significantly reducing the power consumption compared to other solutions.
Tensilica DSPs efficiently offload the host CPU and accelerate sensor data processing, significantly reducing power consumption. Imaging and vision algorithms can run on a DSP that is specifically optimized for these functions. Regardless of the ADAS application, the Tensilica DSP can be deployed in the sensor itself, within the ADAS ECU, or in the central sensor fusion platform.
Regardless of the sensor type, however, a machine needs to analyze the data efficiently and recognize objects reliably. Since many of these systems are camera based, the video data must also be processed and presented to the driver in the most meaningful way via graphic displays.
Recently, neural networks have become very popular for this task, enabling high object-recognition rates of more than 99%. However, current solutions based on CPUs or GPUs consume too much power and therefore cannot be used in production cars.
Two key things must be provided for the efficient deployment of neural networks:
- A scalable, low-power multi-core hardware platform that is fully programmable
- A development software flow that automatically optimizes and maps neural networks on the target platform
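One task such a development flow must solve is distributing layer workloads across the cores of a scalable multi-core platform. A greedy load-balancing sketch illustrates the idea; the core count and per-layer MAC figures are hypothetical, and a production mapper would also account for data movement:

```python
import heapq

def partition_layers(layer_macs, num_cores):
    """Greedily assign each layer to the currently least-loaded core."""
    heap = [(0.0, core) for core in range(num_cores)]   # (load, core id)
    assignment = {}
    # Place the heaviest layers first for a better balance.
    for layer, macs in sorted(layer_macs.items(), key=lambda kv: -kv[1]):
        load, core = heapq.heappop(heap)
        assignment[layer] = core
        heapq.heappush(heap, (load + macs, core))
    return assignment

# Hypothetical per-layer MAC counts (in GMACs) for a small network
layers = {"conv1": 0.2, "conv2": 0.6, "conv3": 0.6, "fc": 0.1}
print(partition_layers(layers, num_cores=2))
```

The point of automating this step is that the same network can be remapped when the core count changes, which is what makes the hardware platform scalable in practice.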
The Tensilica Vision DSP family offers three Vision products:
- The Vision P5 DSP, introduced in 2015, offers 4X to 100X the performance of traditional mobile CPU+GPU systems at a fraction of the power/energy.
- The Vision P6 DSP, introduced in 2016, set a new standard in neural network performance for a general-purpose imaging and computer vision DSP by offering 4X the peak performance compared to the Vision P5 DSP.
- The Vision C5 DSP, introduced in 2017, is the industry’s first standalone, self-contained neural network DSP IP core with 1TMAC/sec computational capacity to run all computational tasks. It is architected for multi-core designs, enabling a multi-TMAC solution in a small footprint.
Xtensa Neural Network Compiler
The Vision C5 DSP and the Vision P6 DSP also come with the Tensilica Xtensa® Neural Network Compiler (XNNC), which maps any neural network trained with tools such as Caffe or TensorFlow into executable, highly optimized code for the Vision C5 and P6 DSPs, leveraging a comprehensive set of hand-optimized neural network library functions.
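The kind of optimization such a compiler performs can be illustrated generically. The pass below fuses adjacent convolution+ReLU pairs into a single fused operation, a common graph rewrite in neural network compilers; it is a hand-written sketch, not the actual XNNC implementation:

```python
def fuse_conv_relu(layers):
    """Fuse adjacent (conv, relu) pairs into one fused op, so the
    activation is applied in the same pass as the convolution."""
    fused, i = [], 0
    while i < len(layers):
        if (layers[i] == "conv" and i + 1 < len(layers)
                and layers[i + 1] == "relu"):
            fused.append("conv_relu")   # one library call instead of two
            i += 2
        else:
            fused.append(layers[i])
            i += 1
    return fused

net = ["conv", "relu", "pool", "conv", "relu", "fc"]
print(fuse_conv_relu(net))  # ['conv_relu', 'pool', 'conv_relu', 'fc']
```

Fusing avoids writing the intermediate activation tensor back to memory, which matters on bandwidth-constrained embedded targets.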
OpenCV and OpenVX Library Support
The Vision P5 and P6 DSPs come with over 1000 OpenCV-like functions. These functions are highly optimized to achieve the best performance on these DSPs.
The Vision P5 and P6 DSPs are the first imaging/vision DSPs to pass The Khronos Group’s conformance tests for the OpenVX 1.1 specification. In addition, a dedicated, highly optimized function library (XICNN) and a DMA manager library (libidma) are provided as well.
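OpenVX structures vision work as a dataflow graph that is verified once and then executed many times, which lets an implementation plan tiling and DMA ahead of the processing loop. A minimal pure-Python analogue of that verify-then-process model follows; the class and node functions are illustrative stand-ins, not the OpenVX C API:

```python
class Graph:
    """Toy analogue of an OpenVX graph: build, verify once, process repeatedly."""
    def __init__(self):
        self.nodes = []
        self.verified = False

    def add_node(self, fn):
        self.nodes.append(fn)
        self.verified = False           # any graph change invalidates verification

    def verify(self):
        # A real implementation checks image formats and plans DMA/tiling here.
        self.verified = True

    def process(self, image):
        assert self.verified, "graph must be verified before processing"
        for fn in self.nodes:
            image = fn(image)
        return image

g = Graph()
g.add_node(lambda img: [p + 1 for p in img])   # stand-in for a filter node
g.add_node(lambda img: [p * 2 for p in img])   # stand-in for a scaling node
g.verify()
print(g.process([1, 2, 3]))  # [4, 6, 8]
```

Separating verification from per-frame execution is what allows the conformant DSP implementation to keep the real-time loop free of setup overhead.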
The Vision C5 DSP, together with the Xtensa Neural Network Compiler and comprehensive library support, provides an efficient solution for the development of neural network applications.
ADAS Reference Platform Evaluation Kit
An ADAS Evaluation Kit developed by Dream Chip provides a full development environment for sophisticated video and signal processing algorithms, based on an ADAS Reference Platform for heterogeneous multi-core, real-time processing. The ADAS evaluation kit comprises an ADAS SoC, a system-on-module (SoM), a quad-HDMI reference board, a board support package (BSP), and an ADAS software development kit (SDK).
The high-performance, low-power ADAS SoC includes a quad-core Cadence Tensilica Vision P6 DSP cluster, quad-core CPU subsystem, a safety subsystem, and Cadence IP for 1G Automotive Ethernet MAC, LPDDR4, and others.
The SoC provides all necessary components to power, boot, debug, and process ADAS applications in real time. Custom signal processing and video processing algorithms can be easily evaluated for performance, throughput, and power consumption on the ADAS reference platform. Also, the evaluation kit can be extended to support customer-specific applications requiring different video interface standards or even non-video sensor sources.
- Introduction to ADAS with a Real-Life Example
- Protium S1 used to prototype a pedestrian detection application
- AI for Image Classification and Object Detection
- Full HD 360° Surround View enabled by Tensilica Vision P6 DSP
- AI for People Detection using Tensilica Vision P6 DSP
- Automotive Sensors: Concepts and Trends
- Breaking Down ADAS Sensor Fusion Platforms and Sensor Concepts
- Renesas: Balancing Performance, Low-Power and Functional Safety in ADAS Applications
- Pedestrian Detection: Cadence Tensilica IVP DSP on Cadence Protium FPGA-based prototyping platform