Artificial Intelligence (AI) enhances the functionality of devices used in many applications – autonomous vehicles, industrial robots, remote controls, game consoles, smartphones, and more. Machine Learning (ML) is a subset of the broader category of AI. By using ML models, which are trained by sifting through enormous amounts of historical data to discover patterns, devices can perform amazing tasks without being explicitly programmed.
ML models are created from known, labeled datasets (the training phase) and are subsequently used to make predictions when presented with new, unknown data in a live deployment scenario (inference). Because enormous computing resources are required for both training and inference, an ML accelerator is now vital for handling these computational workloads.
For most high-volume consumer products, chips are designed with tight cost, power, and size limitations. This is the market Quadric serves with innovative semiconductor intellectual property (IP) building blocks. Our ML-optimized Chimera processors allow companies to rapidly build leading-edge SoCs and more easily write application code for those chips.
After proving the innovative Chimera architecture in 2021 with a test chip, Quadric introduced its first licensable IP product in November 2022 – the industry's first GPNPU (General Purpose Neural Processing Unit) – with delivery planned for Q1 2023.
Quadric has built a unified HW/SW architecture optimized for on-device artificial intelligence computing. Only the Quadric Chimera GPNPU delivers high ML inference performance and also runs complex C++ code without forcing the developer to artificially partition code between two or three different kinds of processors.
Quadric’s Chimera GPNPU is a licensable processor that scales from 1 to 16 TOPS and seamlessly intermixes scalar, vector, and matrix code.