Efficient. Effective.
Everywhere.

Essential AI. Close to the sensor. Inspired
by the human brain. We’re embedding our
IP in everything, everywhere.

The Vision

BrainChip’s vision is to make AI ubiquitous through innovation that accelerates personalized artificial intelligence everywhere. To that end, Akida technology is inspired by the brain, the most efficient cognitive “processor” we know of. It’s the result of more than 15 years of AI architecture research and development by BrainChip co-founders Peter Van Der Made (CTO) and Anil Mankar (CDO), along with their team of neuromorphic experts.

They’ve been developing and continuously improving this technology for extremely efficient AI inference and learning. That’s the foundation for Akida products, developed at centers of engineering excellence in Australia, the USA, France, and India. Akida continues to learn from experience and evolve autonomously, like the human brain, keeping pace with the industry.

At BrainChip, we deliver efficient AI performance at the Edge, enabling intelligent devices and applications.

Compelling
Performance

Very
Accurate

Self
Managed

Extremely
Efficient

Easy to
Deploy

To deliver on these requirements, the technology builds on the following foundations.

While early neuromorphic implementations were in the analog domain, BrainChip has taken a more innovative approach to the Akida architecture. Akida’s neuromorphic processing platform is event-based, fully digital, portable, and proven in silicon.

CNNs, DNNs, RNNs, Vision Transformers (ViT), and more are executed directly in hardware, with minimal CPU intervention.

Our neuromorphic approach means that compute happens only as necessary, based on activations, thereby reducing the number of operations and energy consumed.
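The idea of computing only on activations can be illustrated with a minimal sketch. This is a conceptual example, not BrainChip’s implementation: it compares a dense layer, which spends a multiply-accumulate on every input, with an event-based layer that skips zero activations entirely, so the operation count scales with the number of events rather than the layer size.

```python
# Conceptual sketch only (not Akida's actual compute engine): event-based
# processing performs work only for non-zero activations ("events"),
# skipping the multiply-accumulates a dense layer spends on zeros.

def dense_mac(activations, weights):
    """Dense layer: every input/output pair costs one MAC."""
    ops = 0
    out = [0.0] * len(weights[0])
    for i, a in enumerate(activations):
        for j in range(len(out)):
            out[j] += a * weights[i][j]
            ops += 1
    return out, ops

def event_based_mac(activations, weights):
    """Event-based layer: a zero activation produces no event, so no work."""
    ops = 0
    out = [0.0] * len(weights[0])
    for i, a in enumerate(activations):
        if a == 0.0:          # no event -> no computation
            continue
        for j in range(len(out)):
            out[j] += a * weights[i][j]
            ops += 1
    return out, ops

# ReLU-style activation vectors are typically sparse:
acts = [0.0, 1.5, 0.0, 0.0, 2.0, 0.0, 0.0, 0.5]
w = [[0.1] * 4 for _ in acts]

dense_out, dense_ops = dense_mac(acts, w)
event_out, event_ops = event_based_mac(acts, w)
assert dense_out == event_out       # identical result
print(dense_ops, event_ops)         # 32 vs 12: ops scale with events
```

With 3 of 8 activations non-zero, the event-based path performs 12 MACs instead of 32 while producing the same output, which is the core of the energy saving the paragraph above describes.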

Akida intelligently sends data between neural processing engines through an integrated mesh connecting compute nodes.

Akida significantly reduces memory movement using cost-effective, scalable, standard RAMs.

Akida’s unique ability to learn and extend classes on the device, instead of in the cloud, translates to greater security.
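Extending classes on-device can be illustrated with a minimal sketch. This is a hypothetical nearest-prototype classifier, not Akida’s actual learning rule: adding a class stores only a weight (prototype) vector locally, so the raw sample never needs to leave the device or round-trip to the cloud.

```python
# Conceptual sketch only (not Akida's learning algorithm): a nearest-prototype
# classifier shows how a new class can be learned on-device. Only a weight
# vector is stored; the raw sample is discarded after learning.

class OnDeviceClassifier:
    def __init__(self):
        self.prototypes = {}  # label -> feature vector (the stored "weights")

    def learn(self, label, features):
        """One-shot learning: register a new class from a single example."""
        self.prototypes[label] = list(features)

    def classify(self, features):
        """Return the label whose prototype is closest (squared Euclidean)."""
        def dist(proto):
            return sum((a - b) ** 2 for a, b in zip(proto, features))
        return min(self.prototypes, key=lambda lbl: dist(self.prototypes[lbl]))

clf = OnDeviceClassifier()
clf.learn("cat", [1.0, 0.0, 0.0])
clf.learn("dog", [0.0, 1.0, 0.0])
clf.learn("bird", [0.0, 0.0, 1.0])    # class added on-device, no cloud needed
print(clf.classify([0.1, 0.9, 0.1]))  # closest prototype: "dog"
```

Because only the prototype weights persist, the device can grow its set of recognized classes in the field without ever transmitting the underlying data.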

An intelligent DMA minimizes or eliminates the need for a CPU in AI acceleration and reduces system load.

Compute is performed at the device level and learning is saved only as weights, protecting your sensitive data.

The runtime manages all neural processing operations; it is transparent to the user and accessible through a simple API.

MetaTF tools, integrated into popular frameworks like TensorFlow Keras, simplify development and tuning.

The 2nd generation of the Akida IP Platform builds on these foundations and adds numerous new capabilities.

Akida in Action

Let’s Sharpen the Edge Together

We’re pushing the limits of AI on-chip compute
to maximize efficiency, kill latency, and conserve energy.

Join us.