Learning how to learn: Neuromorphic AI inference at the edge


Q&A with Peter Van Der Made, BrainChip Founder and Chief Technology Officer

The semiconductor industry has long struggled to bypass the von Neumann bottleneck, sustain Moore's Law, and overcome the breakdown of Dennard scaling.

Advanced edge AI applications are fast approaching the limits of conventional silicon and cloud-centric learning models. Backed by the enormous compute power of cloud data centers, AI training and inference models that leverage GPU and TPU hardware accelerators continue to grow in both size and sophistication.

Compute demand has risen steadily over the past decade as networks have grown larger and more complex. In parallel, cloud-based streaming video AI solutions demand ever more internet bandwidth. Clearly, these trends cannot continue without severe consequences, including unmanageable latency, rapidly expanding carbon footprints, and security exploits that intercept raw data in transit to cloud data centers.

This whitepaper discusses the evolution of neuromorphic computing and the limitations of current compute models for edge AI, and explores how neuromorphic silicon is driving a more intelligent and sustainable future.

You can view and download the whitepaper here.