Energy-Efficient On-Device AI Platform to Accelerate AI Everywhere for Everyone

The Cadence AI IP platform empowers SoC developers to design and deliver optimal solutions for a wide range of applications and markets, with IP options including Neo NPUs, the NeuroEdge AI Co-Processor, and Tensilica DSPs to meet power, performance, and area (PPA) demands, all unified under the NeuroWeave SDK common software platform. Together, these deliver the scalable, energy-efficient on-device and edge AI processing that is key to today’s increasingly ubiquitous artificial intelligence and machine learning (AI/ML) SoCs, meeting market needs across automotive, consumer, industrial, and mobile applications.


Scalable Design to Adapt to Various AI Workloads

Efficient in Mapping State-of-the-Art AI/ML Workloads

Best-in-class inferences per second, with low latency and high throughput, and architectures optimized for high performance within a low-energy profile

Industry-Leading Performance and Power Efficiency

High inferences per second per area (IPS/mm²) and per watt (IPS/W)

End-to-End Software Toolchain for All Markets and a Large Number of Frameworks

NeuroWeave SDK provides a common tool for compiling networks across IP, with flexibility for performance, accuracy, and run-time environments

Answering the Needs of a Wide Range of End Applications

  • IoT
  • Hearables and wearables
  • True wireless stereo
  • Smart speakers
  • AR/VR headsets
  • Automotive
  • Mobile
  • Drones and robots
  • Intelligent cameras
  • Private on-premises compute


Flexible and Configurable to Support Product Differentiation

Neo NPUs

The Neo NPU is a cutting-edge neural processing unit designed to revolutionize AI and machine learning applications. This innovative IP integrates AI seamlessly into next-generation silicon, offering unparalleled scalability and power efficiency, and connects easily to a compute subsystem via an AXI interface. The Neo NPU accelerates complex ML computations across a wide variety of network types, from CNNs to LLMs. Ideal for a wide range of industries and end markets, the Neo NPU enables everything from low-power audio applications running fractions of a TOPS to high-performance imaging systems requiring hundreds of TOPS or more. By leveraging the Neo NPU, architects can focus on their areas of expertise and easily add AI to their systems. Experience the future of AI acceleration with the Neo NPU.

NeuroEdge AI Co-Processor

The NeuroEdge AI Co-Processor (AICP) is a new class of processor designed to pair with any NPU to create a robust AI subsystem. The NeuroEdge AICP executes layers and operations that are not well suited to the NPU, such as sigmoid, tanh, ReLU, element-wise (eltwise), and other non-linear operations, as well as operations or layers that require proprietary implementations. The NeuroEdge AICP also offers a wide range of configuration options, striking the right balance between area and performance.
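To make the division of labor concrete, the sketch below shows the kinds of non-linear and element-wise operators described above in plain Python. This is purely illustrative; it is not NeuroEdge code or API, just a generic picture of the operator classes a co-processor typically handles while the NPU focuses on matrix-heavy layers.

```python
import math

# Illustrative (not NeuroEdge-specific) implementations of the operator
# classes an AI co-processor typically offloads from a matrix-oriented NPU.

def sigmoid(x):
    """Non-linear activation: maps any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    """Non-linear activation: maps any real input into (-1, 1)."""
    return math.tanh(x)

def relu(x):
    """Piecewise-linear activation: zero for negative inputs."""
    return x if x > 0.0 else 0.0

def eltwise_add(a, b):
    """Element-wise addition of two equal-length activation tensors,
    e.g. a residual (skip) connection."""
    return [x + y for x, y in zip(a, b)]
```

Operators like these are cheap individually but appear between nearly every NPU-accelerated layer, which is why running them on a tightly coupled co-processor avoids round-trips to a host CPU.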

Tensilica DSPs

The Cadence AI IP portfolio includes the extensible Tensilica DSP platform. Tensilica DSPs include flexible instruction sets designed to perform AI workload operations efficiently. Mixing AI and DSP workloads in a single application is easy with Tensilica DSPs.

NeuroWeave SDK

The NeuroWeave SDK is central to the Cadence AI IP portfolio. This powerful SDK offers a suite of tools, libraries, and drivers that efficiently map ML networks to all Cadence AI IP offerings and deploy them when ready. Users can develop AI networks using popular frameworks such as PyTorch, TensorFlow, TensorFlow Lite Micro (TFLM), ONNX, and more. The NeuroWeave SDK quantizes and maps these networks to the targeted Cadence AI IP, allowing users to choose the optimal IP for performance, power, and area, all within a single, familiar SDK. When requirements change, the NeuroWeave SDK provides a straightforward path to explore other AI IP configurations, ensuring adaptability and ease of use. Unlock the full potential of AI integration with the NeuroWeave SDK.
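The quantization step mentioned above converts a trained network's floating-point weights into low-precision integers for efficient on-device execution. The sketch below shows the basic idea with simple symmetric int8 quantization in pure Python; it is a minimal illustration of the concept, not NeuroWeave's actual (far more sophisticated, hardware-aware) calibration flow, and the function names are hypothetical.

```python
# Minimal sketch of symmetric int8 post-training quantization.
# Hypothetical helper names; NOT the NeuroWeave SDK API.

def quantize_int8(weights):
    """Map float weights into int8 range [-128, 127] with one shared scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

# The round-trip error is the accuracy cost that quantization tooling
# measures and minimizes when mapping a network to fixed-point hardware.
q, scale = quantize_int8([0.5, -1.0, 0.25])
restored = dequantize(q, scale)
```

Production compilers refine this basic scheme with per-channel scales, calibration datasets, and mixed precision, but the float-to-integer mapping shown here is the core operation.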

Protocol IP for AI

Cadence Protocol IP for AI, including PCI Express® (PCIe®), UALink, CXL, Universal Chiplet Interconnect Express (UCIe™), and advanced memory interfaces such as HBM and GDDR, is engineered to optimize AI applications across markets. These technologies provide high-performance, low-latency interconnects for efficient data transfer and seamless communication, supporting various data speeds and configurations, and enable AI systems to handle intensive computational tasks and large datasets with enhanced performance, power efficiency, and scalability.

Need Help?

Training

The Training Learning Maps give you a comprehensive visual overview of learning opportunities.
Training News - Subscribe

Browse training

Online Support

The Cadence Online Support (COS) system offers our entire library of accessible materials for self-study and step-by-step instruction.

Request Support

Technical Forums

Join the community on the technical forums to discuss and elaborate on your design ideas.


Find Answers in Cadence Technical Forums