Deep thinking on compute-in-memory in AI inference
by Don Dingee on 03-09-2023 at 6:00 am

Compute-in-memory for AI inference uses an analog matrix to instantaneously multiply an incoming data word

Neural network models are advancing rapidly and becoming more complex. Application developers using these new models need faster AI inference but typically can’t afford more power, space, or cooling. Researchers have put forth various strategies to wring more performance from AI inference architectures,…
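
As a rough illustration of the analog matrix idea in the caption above, here is a minimal digital simulation of one compute-in-memory matrix-vector multiply: weights sit in a crossbar as conductances, the incoming data word drives the rows as voltages, and every multiply-accumulate happens at once as currents summing on the columns before an ADC digitizes the result. The array size and 8-bit ADC are assumptions for illustration, not any particular vendor's design.

import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((64, 64)).astype(np.float32)  # conductances in the crossbar
x = rng.standard_normal(64).astype(np.float32)              # incoming data word as row voltages

def cim_mvm(G, v, adc_bits=8):
    # Column j collects current sum_i v[i] * G[i, j] (Ohm's and Kirchhoff's
    # laws): the whole matrix-vector product happens in one analog step.
    analog_out = G.T @ v
    # An ADC then quantizes the analog column currents back to digital values.
    scale = max(float(np.abs(analog_out).max()), 1e-12)
    levels = 2 ** (adc_bits - 1) - 1
    return np.round(analog_out / scale * levels) / levels * scale

y = cim_mvm(weights, x)
print("max ADC quantization error:", float(np.abs(y - weights.T @ x).max()))

The error printed at the end is the cost of the analog shortcut: the multiply is effectively free in time and energy, but precision is bounded by the ADC resolution.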


Advantages of Large-Scale Synchronous Clocking Domains in AI Chip Designs
by Kalar Rajendiran on 05-09-2022 at 6:00 am

Large models challenge current AI hardware solutions

We are currently in the hockey-stick growth phase of AI. Advances in artificial intelligence (AI) are happening at a lightning pace. And while the rate of adoption is exploding, so is model size. Over the past couple of years, we’ve gone from about two billion parameters to Google Brain’s recently announced trillion-parameter…
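
A quick back-of-the-envelope calculation shows why that parameter growth challenges current hardware. Assuming FP16 weights at 2 bytes per parameter (an assumption for illustration, not a figure from the article):

# Weight storage at the two model scales the excerpt mentions.
for name, params in [("2B-parameter model", 2e9), ("1T-parameter model", 1e12)]:
    print(f"{name}: {params * 2 / 1e9:,.0f} GB of weight storage")

That works out to roughly 4 GB versus 2,000 GB: a trillion-parameter model needs about 2 TB for its weights alone, far beyond the on-package memory of any single accelerator.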