Overview
Energy-Efficient On-Device AI Platform to Accelerate AI Everywhere for Everyone
As demand for AI-based tasks has grown across a wide range of applications and vertical segments, on-device and edge AI processing has become increasingly prevalent. Because these solutions are deployed in SoCs with varying computational and power requirements, meeting market needs across automotive, consumer, industrial, and mobile applications can be challenging for both silicon IP providers and SoC companies.

Key Benefits
Low-, Mid-, and High-End AI Platforms for the Full Spectrum of Performance, Power, and Cost Points
AI platforms with extensibility, configurability, and a sparse compute engine
Scalable Design to Adapt to Various AI Workloads
AI Base is built on a successful and power-efficient domain-specific DSP. Scalable AI Boost and AI Max options span low-compute (<1 tera operations per second (TOPS)) to very high compute (100s of TOPS) needs
Efficient in Mapping State-of-the-Art DL/AI Workloads
Best-in-class performance for inferences per second with low latency and high throughput
End-to-End Software Toolchain for All Markets and Large Number of Frameworks
GLOW-based Xtensa® Neural Network Compiler (XNNC), interpreter, and delegate-based AI software tools
True Random Sparsity Gain
Sparse compute AI engine exploits tensor sparsity (both weights and activations)
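As a rough illustration of the zero-skipping idea behind a sparsity-aware engine (this is a hypothetical sketch, not Cadence's implementation), a multiply-accumulate can be skipped whenever either the weight or the activation operand is zero, saving both cycles and energy:

```python
def sparse_dot(weights, activations):
    """Dot product that skips MACs where either operand is zero,
    mimicking a sparsity-aware compute engine.
    Returns (result, macs_performed)."""
    result = 0
    macs = 0
    for w, a in zip(weights, activations):
        if w != 0 and a != 0:  # zero-skipping: no work spent on this MAC
            result += w * a
            macs += 1
    return result, macs

# Hypothetical vectors with random sparsity in both operands:
w = [0, 2, 0, 3, 0, 0, 1, 0]   # 5 of 8 weights are zero
a = [5, 0, 7, 1, 0, 2, 4, 0]   # 4 of 8 activations are zero
res, macs = sparse_dot(w, a)
# A dense engine would perform 8 MACs here; zero-skipping performs only 2.
```

Because the gain depends on where zeros happen to fall in both tensors, exploiting random (unstructured) sparsity in weights and activations, as described above, yields savings proportional to the product of the two densities rather than requiring a fixed pruning pattern.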
Industry-Leading Performance and Power Efficiency
High MAC utilization and TOPS/Watt combined with low energy consumption
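To show how MAC utilization and power combine into the TOPS/Watt figure of merit mentioned above, here is a back-of-the-envelope sketch; all numbers are hypothetical illustrations, not Cadence specifications:

```python
def effective_tops(peak_tops, mac_utilization):
    """Sustained throughput = peak throughput x average MAC utilization (0..1)."""
    return peak_tops * mac_utilization

def tops_per_watt(tops, power_watts):
    """Energy-efficiency figure of merit: sustained TOPS per watt consumed."""
    return tops / power_watts

# Hypothetical example: a 16-TOPS-peak engine sustaining 80% MAC utilization
# while drawing 2 W delivers 12.8 effective TOPS at 6.4 TOPS/W.
sustained = effective_tops(16.0, 0.8)
efficiency = tops_per_watt(sustained, 2.0)
```

This is why high MAC utilization matters as much as raw peak TOPS: an engine with twice the peak rating but half the utilization delivers the same sustained throughput while typically burning more power.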
Target Markets
Answering the Needs of a Wide Range of End Applications