
AKIDA NEURAL
PROCESSOR IP

Personalize Your Edge AI SoC

BrainChip’s ultra-low-power licensable AI IP is ideal for cost- and size-sensitive applications

BrainChip’s patented AI architecture is the result of over 15 years of fundamental R&D into event-domain processing. BrainChip’s configurable and scalable neural processor IP is ideal for ultra-low-power Edge AI applications, with power budgets ranging from microwatts to milliwatts and a minimal memory footprint. BrainChip’s technology enables incremental, one-shot, and continuous learning without requiring re-training.
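To make the learning claim concrete, the sketch below shows the general prototype-based idea behind one-shot learning: a new class is registered from a single feature embedding and recognized by nearest-prototype matching. This is a conceptual illustration in Python, not BrainChip’s on-chip learning algorithm; all names and dimensions here are hypothetical.

```python
import numpy as np

def add_class(prototypes: dict, label: str, embedding: np.ndarray) -> None:
    """Register a new class from a single example by storing its
    normalized feature embedding as a prototype (one-shot learning)."""
    prototypes[label] = embedding / np.linalg.norm(embedding)

def classify(prototypes: dict, embedding: np.ndarray) -> str:
    """Return the label of the most similar stored prototype
    (cosine similarity via dot product of unit vectors)."""
    e = embedding / np.linalg.norm(embedding)
    return max(prototypes, key=lambda label: float(prototypes[label] @ e))

# Usage: a brand-new keyword class is learned from one example,
# with no gradient-based retraining pass over the model.
protos: dict = {}
rng = np.random.default_rng(0)
add_class(protos, "my_keyword", rng.random(64))
print(classify(protos, rng.random(64)))  # -> "my_keyword" (only class stored)
```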

Key Features:

  • Robust software development environment and tools
  • Complete configurable neural network processor
  • On-chip mesh network interconnect
  • Standard AXI 4.0 interface for on-chip communication
  • Scalable nodes can be configured as:

– Event-domain convolutional neural processor (CNP)
– Fully connected neural processor (FNP)

  • Hardware-based event processing
  • No CPU required
  • External memory optional (SRAM or DDR)
  • Configurable amounts of embedded memory and input buffers
  • Integrated DMA and data-to-event converter
  • Hardware support for on-chip learning
  • Hardware support for hybrid quantized 1-, 2-, or 4-bit weights and activations to reduce power and minimize memory footprint (see the quantization sketch after this list)
  • Fully synthesizable RTL
  • IP deliverables package compatible with standard EDA tools

– Complete testbench with simulation results
– RTL synthesis scripts and timing constraints
– Customized IP package targeted for your application
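
To illustrate why 1-, 2-, and 4-bit weights shrink the memory footprint, the sketch below applies a generic uniform symmetric quantizer (an assumed textbook scheme, not BrainChip’s exact method) to a hypothetical one-million-weight layer and compares its storage against fp32.

```python
import numpy as np

def quantize(w: np.ndarray, bits: int) -> np.ndarray:
    """Uniform symmetric quantization to signed 'bits'-bit integers.
    A generic textbook scheme used for illustration only; it is not
    BrainChip's exact quantization method."""
    levels = 2 ** (bits - 1) - 1 or 1      # 4b -> ±7, 2b -> ±1, 1b -> ±1
    scale = np.abs(w).max() / levels
    return np.clip(np.round(w / scale), -levels, levels).astype(np.int8)

weights = np.random.randn(1_000_000).astype(np.float32)  # hypothetical 1M-weight layer
print(f"fp32 footprint: {weights.nbytes / 1024:.0f} KiB")
for bits in (4, 2, 1):
    q = quantize(weights, bits)
    print(f"{bits}-bit footprint: {q.size * bits / 8 / 1024:.0f} KiB")
```

At 4 bits the same layer occupies roughly one eighth of its fp32 footprint, which is the arithmetic behind fitting entire networks into embedded memory.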

Applications:

  • Smart appliances
  • Remote controls
  • Industrial IoT
  • Security cameras
  • Sensors
  • Robots/drones
  • Automotive
  • Audio devices

Use cases:

  • Object detection
  • Sound detection
  • Object tracking
  • Sound recognition
  • Facial recognition
  • Keyword spotting
  • Gesture recognition
  • Packet inspection

BrainChip’s AI IP is an event-based technology that is inherently lower power than conventional neural network accelerators. BrainChip IP enables incremental learning and high-speed inference across a wide variety of use cases with high throughput and unsurpassed performance-per-watt.
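
The intuition behind event-based processing can be sketched in a few lines: computation is performed only for nonzero inputs (events), so the multiply-accumulate count scales with activity rather than with tensor size. The Python below is a conceptual model, not Akida’s hardware pipeline; the activity level of 32 events out of 1024 inputs is an arbitrary assumption.

```python
import numpy as np

def dense_fc(x: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Conventional dense layer: every weight is touched on every input."""
    return W @ x                                   # O(rows * cols) MACs

def event_fc(x: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Event-style layer: only columns for active (nonzero) inputs
    contribute, so work scales with the number of events."""
    events = np.flatnonzero(x)                     # indices of active inputs
    return W[:, events] @ x[events]                # O(rows * nnz) MACs

rng = np.random.default_rng(0)
x = np.zeros(1024)
x[rng.choice(1024, size=32, replace=False)] = 1.0  # 32 events out of 1024
W = rng.standard_normal((256, 1024))
assert np.allclose(dense_fc(x, W), event_fc(x, W))  # same result, far fewer MACs
print(f"active inputs: 32/1024 -> ~{32/1024:.0%} of the dense MAC count")
```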

BrainChip’s IP can be configured to perform convolutional (CNP) and fully connected (FNP) layers. Weight bit-precision is programmable to optimize for throughput or accuracy, and each weight is stored locally in embedded SRAM inside each NPU. An entire neural network can be placed into the fabric, removing the need to swap weights in and out of DRAM, which reduces power consumption while increasing throughput.
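As a rough illustration of the arithmetic behind keeping an entire network on chip, the sketch below checks whether a hypothetical two-million-parameter model fits in an assumed 2 MiB of embedded SRAM at various weight precisions. All figures are illustrative assumptions, not Akida specifications.

```python
# Back-of-envelope check: can the whole model live in embedded SRAM?
# PARAMS and SRAM_KIB are illustrative assumptions, not Akida specs.
PARAMS = 2_000_000          # hypothetical edge-CNN parameter count
SRAM_KIB = 2 * 1024         # hypothetical 2 MiB of embedded SRAM

for bits in (32, 8, 4, 2, 1):
    kib = PARAMS * bits / 8 / 1024
    verdict = "fits on chip" if kib <= SRAM_KIB else "needs external DRAM"
    print(f"{bits:>2}-bit weights: {kib:>7.0f} KiB -> {verdict}")
```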

BrainChip’s IP fabric can be placed either in a parallelized configuration for maximum performance, or in a space-optimized configuration that reduces silicon area and further lowers power consumption. Users can also adjust the clock frequency to tune performance and power consumption.
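
The tradeoff this exposes can be approximated with the first-order CMOS dynamic-power relation P = C_eff · V² · f: lowering the clock reduces dynamic power roughly linearly, while a parallel placement recovers throughput with more nodes. The constants below are placeholders chosen for illustration, not measured Akida values.

```python
# First-order CMOS dynamic power model: P = C_eff * V^2 * f.
# C_EFF, V, node counts, and clock rates are placeholder assumptions.
C_EFF = 1e-11               # hypothetical switched capacitance per node (F)
V = 0.8                     # hypothetical supply voltage (V)

configs = [("parallelized, fast clock", 8, 300e6),
           ("space-optimized, slow clock", 2, 150e6)]
for label, nodes, f_hz in configs:
    p_mw = nodes * C_EFF * V ** 2 * f_hz * 1e3
    print(f"{label}: {nodes} nodes @ {f_hz/1e6:.0f} MHz -> ~{p_mw:.2f} mW dynamic")
```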
