BrainChip has developed an ultra-low-power, event-domain neural processor capable of continuous learning and inference, supporting many of today's standard neural network solutions.


BrainChip has solved the problems inherent in moving AI out of the data center and to the location where data is created: the Edge. Its ultra-low power, flexible, self-contained, event-based neural processor is capable of inferencing and learning to support today’s most common neural networks.

BrainChip’s AI neural processor is an event-based technology that is inherently lower power when compared to conventional neural network accelerators. BrainChip’s neural processor allows incremental learning and high-speed inferencing in a wide variety of use cases with high throughput and unsurpassed performance-per-watt.


Overcoming current technology barriers

AI needs to evolve. The next generation of AI requires intelligence to take place at the edge – because that’s where data is generated or sensed, and where information is needed. BrainChip believes this demands ultra-low power consumption, and no dependency on network connections to powerful remote compute infrastructures. Additionally, data generation at the edge needs a stable platform for continuous and autonomous learning that adapts to the local environment. This is what the Akida platform delivers.


The BrainChip Akida Neural Processor

BrainChip has dedicated the past 15 years to studying AI and gained a comprehensive understanding of the capabilities and limitations of conventional AI processing.

The Akida neural processor is a complete, purpose-built solution to the inherent problems of today's technology in addressing edge AI: limited power budgets, limited processor performance, limited memory, limited scalability, limited or no connectivity, and limited cost-effectiveness. At the same time, it enables true intelligence at the Edge through continuous learning.

By leveraging our knowledge of artificial intelligence and human brain function, the Akida event-domain processor offers efficiency, ultra-low power consumption, and continuous learning. It requires only internal memory, though it can use external memory if needed, and can perform inference and incremental learning without host processor support. It represents the third generation of neural network processing and the next step in the evolution of AI: a flexible, scalable, event-based processor for Edge AI applications that can revolutionize human-to-machine interaction and the IoT as we know it.


Akida vs. first- and second-generation neural processors

Traditional solutions use a CPU to run the neural network algorithm, a deep learning accelerator (such as a GPU) to perform multiply-accumulate operations, and memory to store network parameters. What makes Akida technology so different from first- and second-generation neural processors? By integrating all the required elements into a consolidated, small-footprint, purpose-built neural processor, Akida eliminates unnecessary compute and data I/O overhead, along with the excess power consumption caused by interaction and communication between separate elements.

Unlike legacy processors, the Akida processor is event-based. "Events" indicate the presence of useful information. Conventional technologies process all information without discerning whether it is useful, wasting effort and resources. In addition, as an event-based processor, Akida is capable of learning without full retraining.


Incremental Learning

With the Akida platform, the next generation of AI devices at the edge can continuously learn and dynamically relearn. Previous-generation AI solutions have a significant drawback: once trained, the system cannot easily learn new things without repeating the entire training process. BrainChip's proprietary algorithm and scalable neural fabric eliminate the need for data round trips to centralized CPUs for retraining. This ability enables personalization of Edge AI devices.
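BrainChip's learning algorithm is proprietary, but the general idea of adding knowledge without a full retraining pass can be illustrated with a toy nearest-prototype classifier (a hypothetical sketch, not BrainChip's method): learning a new class simply stores a new prototype, leaving previously learned classes untouched.

```python
import numpy as np

class PrototypeClassifier:
    """Toy nearest-prototype classifier: each class is one stored vector.
    Learning a new class just appends a prototype, so existing classes
    are untouched and no retraining pass over old data is needed.
    (Illustrative only -- not BrainChip's actual algorithm.)"""

    def __init__(self):
        self.prototypes = {}  # label -> feature vector

    def learn(self, label, example):
        # One-shot "incremental learning": store a prototype for the label.
        self.prototypes[label] = np.asarray(example, dtype=float)

    def predict(self, example):
        x = np.asarray(example, dtype=float)
        # Classify by nearest prototype (Euclidean distance).
        return min(self.prototypes,
                   key=lambda lbl: np.linalg.norm(self.prototypes[lbl] - x))

clf = PrototypeClassifier()
clf.learn("cat", [1.0, 0.0])
clf.learn("dog", [0.0, 1.0])
print(clf.predict([0.9, 0.1]))   # -> cat

# Add a brand-new class from a single example; cat/dog are not retrained.
clf.learn("bird", [1.0, 1.0])
print(clf.predict([0.95, 0.9]))  # -> bird
```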


Ultra-Low Power Consumption

The Akida neural processor analyzes data such as images, sounds, data patterns, and other sensor data and extracts useful information generated from events in the data. Events tend to be sparse, and as in the human brain, this sparsity contributes to significant power savings by eliminating wasted effort executing computations on data with no value. In combination with state-of-the-art circuit architecture and implementation, the Akida neural processor reduces power consumption by up to 10x compared with the most power-efficient alternatives on the market, and by up to 1,000x compared with standard data center architectures. This matters because for AI applications at the edge, where information is created, power budgets can be limited to microwatts or milliwatts.
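A small numeric sketch shows why sparsity saves work: when most input values carry no event (are zero), an event-based scheme can skip those positions entirely and still produce the same multiply-accumulate result. The 90% sparsity figure below is an arbitrary illustration, not a measured Akida workload.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse "event" input: most samples carry no new information.
x = rng.random(1000)
x[x < 0.9] = 0.0          # roughly 90% of entries are zero (no event)
w = rng.random(1000)      # synaptic weights

# Dense approach: multiply-accumulate over every element, event or not.
dense_macs = x.size
dense_result = float(np.dot(x, w))

# Event-based approach: touch only the nonzero (event) positions.
events = np.flatnonzero(x)
event_macs = events.size
event_result = float(np.dot(x[events], w[events]))

# Same answer, an order of magnitude fewer operations.
assert np.isclose(dense_result, event_result)
print(f"dense MACs: {dense_macs}, event MACs: {event_macs}")
```

Fewer multiply-accumulate operations translate directly into fewer switching transistors, which is where the power savings come from.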


Scalable Architecture

The Akida neural processor is built as an array of fundamental building blocks called Nodes. Each Node contains four Neural Processing Units, which contain all of the elements necessary to perform event-based processing, such as convolution. Nodes are arrayed and interconnected through a mesh network to build a scalable neural processing solution sized to application needs. In addition to scaling by parallelizing nodes, a solution can be scaled by reusing a smaller group of nodes in a recirculating fashion to process a neural network.
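The two scaling modes can be sketched as a simple scheduling problem (a hypothetical illustration, not Akida's actual layer mapper): with enough nodes, every layer maps to its own node in a single parallel pass; with fewer, the same nodes are reused across multiple recirculating passes.

```python
import math

def schedule_layers(num_layers, num_nodes):
    """Assign network layers to hardware nodes (toy model).
    With num_nodes >= num_layers everything runs in one parallel pass;
    otherwise the same nodes are reused ("recirculated") over
    ceil(num_layers / num_nodes) passes."""
    passes = math.ceil(num_layers / num_nodes)
    plan = []
    for p in range(passes):
        start = p * num_nodes
        plan.append(list(range(start, min(start + num_nodes, num_layers))))
    return plan

# An 8-layer network on 8 nodes: one pass, fully parallel.
print(schedule_layers(8, 8))  # [[0, 1, 2, 3, 4, 5, 6, 7]]
# The same network on 2 nodes: four recirculating passes.
print(schedule_layers(8, 2))  # [[0, 1], [2, 3], [4, 5], [6, 7]]
```

The trade-off is the usual area-versus-latency one: fewer nodes mean a smaller, cheaper device, at the cost of more passes per inference.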


Akida Software Development Environment (ADE)

Once a network has been developed, quantized, and trained, it is run on the Akida event-domain processor emulator for full performance evaluation. The ADE leverages Python and its associated tools and libraries, such as NumPy. It comprises the Akida Execution Engine (including a neural processor simulator), a CNN conversion flow, and a "model zoo" to integrate customers' pre-created neural network models.
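The quantization step mentioned above maps floating-point weights onto a small set of integer levels. The following NumPy sketch shows generic uniform symmetric quantization as an illustration of the concept; it is not the ADE's actual conversion flow, and the 4-bit width is just an example.

```python
import numpy as np

def quantize(weights, bits=4):
    """Uniform symmetric quantization (generic sketch, not the ADE flow):
    map float weights onto signed integer levels plus one scale factor
    that lets the integers be dequantized back to approximate floats."""
    levels = 2 ** (bits - 1) - 1          # e.g. 7 levels each side for 4-bit
    scale = np.max(np.abs(weights)) / levels
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

w = np.array([0.8, -0.31, 0.05, -0.77])
q, scale = quantize(w, bits=4)
print(q)          # small signed integers
print(q * scale)  # approximate reconstruction of the original weights
```

Storing small integers instead of 32-bit floats is what lets a network fit in the processor's internal memory and keeps the arithmetic cheap.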