In-Memory Computing for Low-Power Neural Network Inference
by Tom Dillinger on 07-17-2020 at 10:00 am

von Neumann bottleneck

"AI is the new electricity," according to Andrew Ng, Professor at Stanford University. The potential applications for machine learning classification are vast. Yet current ML inference techniques are limited by the high power dissipation associated with traditional architectures. The figure below highlights the …