Energy-Efficient On-Device AI Platform to Accelerate AI Everywhere for Everyone

The Cadence AI IP platform empowers SoC developers to design and deliver optimal solutions for a wide range of applications and markets. Its IP options include Neo NPUs, Tensilica AI Base, and Tensilica AI Boost, which meet a range of power, performance, and area (PPA) demands and are unified behind the NeuroWeave SDK common software platform. Together, these deliver the scalable, energy-efficient on-device and edge AI processing that is key to today's increasingly ubiquitous artificial intelligence and machine learning (AI/ML) SoCs, meeting market needs across automotive, consumer, industrial, and mobile applications.


Scalable Design to Adapt to Various AI Workloads

Efficient in Mapping State-of-the-Art AI/ML Workloads

Best-in-class inference-per-second performance with low latency and high throughput, built on architectures optimized to deliver high performance within a low-energy profile

Industry-Leading Performance and Power Efficiency

High inferences per second per unit area (IPS/mm²) and per watt (IPS/W)
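As a quick illustration of how these two efficiency metrics are derived, the following sketch uses purely hypothetical placeholder numbers, not measured Cadence results:

```python
# Hypothetical example: deriving IPS/mm^2 and IPS/W.
# All figures below are illustrative placeholders, not measured data.

inferences_per_second = 2_000   # measured throughput for a given network
core_area_mm2 = 4.0             # silicon area of the NPU core
power_w = 1.25                  # average power drawn during inference

ips_per_mm2 = inferences_per_second / core_area_mm2   # area efficiency
ips_per_w = inferences_per_second / power_w           # energy efficiency

print(f"{ips_per_mm2:.1f} IPS/mm^2, {ips_per_w:.1f} IPS/W")
```

Higher values on both ratios mean more inference throughput is extracted from each square millimeter of silicon and each watt of power.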

End-to-End Software Toolchain for All Markets and a Large Number of Frameworks

NeuroWeave SDK provides a common tool for compiling networks across IP, with flexibility for performance, accuracy, and run-time environments

Answering the Needs of a Wide Range of End Applications

  • IoT
  • Hearables and wearables
  • True wireless stereo
  • Smart speakers
  • AR/VR headsets
  • Automotive
  • Mobile
  • Drones and robots
  • Intelligent cameras
  • Private on-premises compute


Flexible and Configurable to Support Product Differentiation

Neo NPUs

The Neo NPUs lead the Cadence AI IP portfolio. Delivering a wide performance range and supporting up to 80 TOPS in a single core, the Neo NPUs provide accelerated AI processing for everything from energy-sensitive IoT and wearable devices to high-performance AR/VR and automotive systems. The Neo NPUs can be scaled to hundreds of TOPS through multi-core topologies.

The Neo NPUs are designed to efficiently deliver high performance for CNN, RNN, and Transformer-based networks, supporting the transition from classic to generative AI and the necessary underlying processing.
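To give a feel for what a TOPS figure means for throughput, here is a back-of-envelope sizing sketch. The utilization factor and the MAC count are illustrative assumptions (ResNet-50-class networks are commonly cited at roughly 4 GMACs per inference), not Neo NPU specifications:

```python
# Back-of-envelope sizing sketch with hypothetical numbers: estimate the
# theoretical inference rate a given TOPS budget supports for a network
# with a known multiply-accumulate (MAC) count. One MAC counts as 2 ops.

def peak_inferences_per_second(tops: float, macs_per_inference: float,
                               utilization: float = 0.5) -> float:
    """Theoretical upper bound on inferences/s at an assumed utilization."""
    ops_per_second = tops * 1e12 * utilization
    return ops_per_second / (macs_per_inference * 2)

# e.g. an 80 TOPS core running a ~4e9-MAC network at 50% utilization
print(f"{peak_inferences_per_second(80, 4e9):.0f} inferences/s")
```

Real throughput depends on memory bandwidth, operator coverage, and compiler mapping, which is why achieved utilization, not peak TOPS alone, determines delivered performance.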

NeuroWeave SDK

The Cadence AI IP portfolio is unified behind a common software stack with the NeuroWeave SDK, which offers a consistent development environment to deploy network models on the underlying hardware IP. The NeuroWeave SDK is a flexible tool environment designed to efficiently analyze, map, and compile various types of networks and their underlying operations and achieve optimal performance.

Tensilica AI Base

The Cadence AI IP portfolio includes the extensible Tensilica DSP platform. Tensilica DSPs include flexible instruction sets designed to perform AI workload operations efficiently. Mixing AI and DSP workloads in a single application is easy with Tensilica DSPs.

Tensilica AI Boost

The Cadence Tensilica NNE 110 is a compact engine that extends the AI processing of Tensilica DSPs with a 32-, 64-, or 128-MAC offload for common network layers.
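For readers unfamiliar with the term, the multiply-accumulate (MAC) is the inner-loop operation that dominates common network layers, and it is what such an offload engine parallelizes in hardware. The sketch below is purely conceptual, not the NNE 110 programming model:

```python
# Conceptual sketch (not the NNE 110 programming model): a dense layer
# written to expose the multiply-accumulate loop that a MAC offload
# engine executes in parallel (e.g., 128 MACs retired per cycle).

def fully_connected(inputs, weights, biases):
    """Dense layer as the MAC-dominated loop an offload engine accelerates."""
    outputs = []
    for w_row, b in zip(weights, biases):
        acc = b
        for x, w in zip(inputs, w_row):
            acc += x * w          # one multiply-accumulate (MAC)
        outputs.append(acc)
    return outputs

print(fully_connected([1.0, 2.0], [[0.5, 0.5], [1.0, -1.0]], [0.0, 0.5]))
# → [1.5, -0.5]
```

Offloading this loop frees the DSP to handle control flow and signal-processing stages of the same application.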

Need Help?

Training

The Training Learning Maps give you a comprehensive visual overview of learning opportunities.

Browse training

Online Support

The Cadence Online Support (COS) system houses our entire library of accessible materials for self-study and step-by-step instruction.

Request Support

Technical Forums

Join the community on the technical forums to discuss and refine your design ideas.


Find answers in the Cadence technical forums