Flex Logix InferX X1 Optimizes Edge Inference at Linley Processor Conference
by Camille Kokozaki on 04-18-2019 at 12:00 pm

Dr. Cheng Wang, Co-Founder and SVP Engineering at Flex Logix, presented the second talk in the ‘AI at the Edge’ session at the just-concluded Linley Spring Processor Conference, highlighting the InferX X1 Inference Co-Processor’s high throughput, low cost, and low power. He opened by pointing out that existing inference solutions… Read More


Real Time Object Recognition for Automotive Applications
by Tom Simon on 04-12-2019 at 7:00 am

The basic principles used for neural networks have been understood for decades; what has changed to make them so successful in recent years is the increase in processing power, storage, and training data. Layered on top of this is continued improvement in algorithms, often enabled by dramatic hardware performance gains.… Read More


AI at the Edge
by Tom Dillinger on 12-20-2018 at 7:00 am

Frequent SemiWiki readers are well aware of the industry momentum behind machine learning applications. New opportunities are emerging at a rapid pace. High-level programming language semantics and compilers to capture and simulate neural network models have been developed to enhance developer productivity (link). Researchers… Read More
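As a rough illustration of what such high-level model capture looks like in practice (a generic PyTorch sketch of my own, not code from the article or the linked research; layer sizes are arbitrary example values), a small network can be defined and simulated in a handful of lines:

import torch
import torch.nn as nn

# A tiny, illustrative classifier; dimensions are arbitrary example values.
class TinyClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3-channel image in, 16 feature maps out
            nn.ReLU(),
            nn.MaxPool2d(2),                              # halve the spatial resolution
        )
        self.classifier = nn.Linear(16 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# Simulate a single inference pass on a random 32x32 RGB image.
model = TinyClassifier()
logits = model(torch.randn(1, 3, 32, 32))
print(logits.shape)  # torch.Size([1, 10])

A framework-level description like this stays independent of the hardware it eventually runs on, which is what makes such front ends attractive for edge-inference compilers.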


Architecture for Machine Learning Applications at the Edge
by Tom Dillinger on 10-31-2018 at 2:01 pm

Machine learning applications in data centers (or “the cloud”) have pervasively changed our environment. Advances in speech recognition and natural language understanding have enabled personal assistants to augment our daily lifestyle. Image classification and object recognition techniques enrich our social media experience,… Read More


Neural Network Efficiency with Embedded FPGA’s
by Tom Dillinger on 09-21-2018 at 12:00 pm

The traditional metrics for evaluating IP are performance, power, and area, commonly abbreviated as PPA. Viewed independently, PPA measures can be difficult to assess. As an example, design constraints that are purely based on performance, without concern for the associated power dissipation and circuit area, are increasingly… Read More
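Viewed jointly rather than independently, these metrics are often folded into a single figure of merit. As a generic illustration (my own formulation, not one proposed in the article), an inference IP block might be compared on throughput delivered per unit of power and silicon area:

\mathrm{FoM} \;=\; \frac{\text{throughput (inferences/s)}}{\text{power (W)} \times \text{area (mm}^2)}

A higher value indicates performance achieved without a disproportionate power or area cost, which is the trade-off the excerpt alludes to.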


Machine Learning and Embedded FPGA IP
by Tom Dillinger on 07-18-2018 at 12:00 pm

Machine learning-based applications have become prevalent across consumer, medical, and automotive markets. Still, the underlying architecture(s) and implementations are evolving rapidly to best fit the throughput, latency, and power efficiency requirements of an ever-increasing application space. Although ML is … Read More


The Best of IP at DAC 2018 Conference
by Eric Esteve on 06-15-2018 at 12:00 pm

Design IP is doing well, with 12% YoY growth in 2017, even though the market is only about $3.5B. But Design IP is serving a $400B semiconductor market. Can you imagine the future of the semi market if chip makers couldn’t have access to Design IP? The same is true for EDA: it’s a niche market (CAE revenues were about $3B and IC Physical Design… Read More
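To put those figures in perspective (simple arithmetic on the numbers quoted above, not additional market data):

\frac{\$3.5\,\text{B}}{\$400\,\text{B}} \approx 0.9\%

so Design IP represents well under one percent of the semiconductor revenue it helps enable.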


Block RAM integration for an Embedded FPGA
by Tom Dillinger on 05-22-2018 at 12:00 pm

The upcoming Design Automation Conference in San Francisco includes a very interesting session – “Has the Time for Embedded FPGA Come at Last?” Periodically, I’ve been having coffee with the team at Flex Logix to get their perspective on this very question – specifically, to learn about the key features that customers are seeking… Read More