I generally like to start my blogs with an application-centric viewpoint: what end-application is going to become faster, lower power, or otherwise better because of this innovation? But sometimes an announcement defies such easy classification because it is broadly useful. That’s the case for a recent release from Quadric, based… Read More
Tag: transformer
Inference Efficiency in Performance, Power, Area, Scalability
Support for AI at the edge has prompted a good deal of innovation in accelerators, initially for CNNs, then evolving to DNNs and RNNs (convolutional, deep, and recurrent neural nets). Most recently, the transformer technology behind the craze in large language models is proving to have important relevance at… Read More
Next-Gen AI Engine for Intelligent Vision Applications
Artificial Intelligence (AI) has seen explosive growth in applications across industries, ranging from autonomous vehicles and natural language processing to computer vision and robotics. The AI embedded semiconductor market is projected to reach $800 billion by 2030. Compare this with just $48 billion… Read More
Vision Transformers Challenge Accelerator Architectures
For what seems like a long time in the fast-moving world of AI, CNNs and their relatives have driven AI engine architectures at the edge. While the nature of neural net algorithms has evolved significantly, they have all been assumed to be efficiently handled on a heterogeneous platform processing through the layers of a DNN: an NPU for … Read More