BrainChip Brings AI to the Edge and Beyond

via Gestalt IT

Until now, artificial intelligence processing has been a centralized function, built on massive systems with thousands of processors working in parallel. But researchers have found that lower-precision operations work just as well as full-precision arithmetic for popular applications like speech and image processing. This opens the door to a new generation of cheap, low-power machine learning chips from companies like BrainChip.
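
To get a feel for what "lower precision" means, here is a minimal sketch (a hypothetical NumPy example, not BrainChip's implementation) that quantizes 32-bit floating-point weights down to 8-bit integers and checks how little the values shift in the round trip:

```python
import numpy as np

# Hypothetical example: quantize 32-bit float weights to 8-bit integers
# and measure how much information is lost in the round trip.
rng = np.random.default_rng(0)
weights_fp32 = rng.normal(0.0, 0.5, size=10_000).astype(np.float32)

# Symmetric linear quantization: map the float range onto int8 [-127, 127].
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.clip(np.round(weights_fp32 / scale), -127, 127).astype(np.int8)

# Dequantize and compare against the originals.
weights_restored = weights_int8.astype(np.float32) * scale
max_error = np.abs(weights_fp32 - weights_restored).max()

print(f"storage: {weights_fp32.nbytes} bytes -> {weights_int8.nbytes} bytes")
print(f"worst-case rounding error: {max_error:.5f} (scale = {scale:.5f})")
```

For many speech and image models this rounding error is small enough that accuracy barely moves, while memory footprint and power draw drop sharply, which is what makes cheap, low-power inference chips practical.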

AI is Moving Out of the Core

Machine learning is computationally demanding, requiring massive data sets and power-hungry processors. Even after a model is trained, it still takes serious horsepower to churn through real-time data. Although many devices can perform inference, most rely on dedicated GPU or neural-network processing engines. These are large and complex, so they typically draw considerable power, which is why most machine learning (ML) processing started out in massive centralized computer systems.

One reason for this complexity is that machine learning chips typically push data through many parallel pipelines. Nvidia’s popular Tesla chips, for example, have hundreds or thousands of cores, enabling massive parallelism and performance. Many of these chips are designed as graphics processors (GPUs), so they include components and optimizations that go unused in ML processing. These cards typically draw a lot of power and require dedicated cooling and support infrastructure.
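
To see why so many cores help, note that the workhorse of inference, a matrix-vector product, splits into fully independent dot products, one per output value. The sketch below (a plain Python illustration, not how any particular GPU actually schedules work) computes each output element separately, which is exactly the kind of workload that maps onto hundreds or thousands of parallel pipelines:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Hypothetical illustration: a layer's matrix-vector product is just many
# independent dot products, one per output neuron, so they can run in parallel.
rng = np.random.default_rng(1)
weights = rng.normal(size=(1024, 512)).astype(np.float32)   # one layer's weights
activations = rng.normal(size=512).astype(np.float32)       # incoming activations

def one_output(row_index: int) -> np.float32:
    # Each output depends only on its own weight row -- no coordination needed.
    return np.dot(weights[row_index], activations)

with ThreadPoolExecutor() as pool:
    outputs = np.fromiter(pool.map(one_output, range(weights.shape[0])),
                          dtype=np.float32)

# Sanity check against the single-call version.
assert np.allclose(outputs, weights @ activations, atol=1e-4)
```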

Recently…

READ MORE