Characteristics of an Efficient Inference Processor
by Tom Dillinger on 12-11-2019 at 10:00 am

The market opportunities for machine learning hardware are becoming more distinct, with the following (rather broad) categories emerging:

  1. Model training: models are evaluated at the "hyperscale" data center, utilizing either general-purpose processors or specialized hardware, with typical numeric precision of 32 bits

AI Inference at the Edge – Architecture and Design
by Tom Dillinger on 09-23-2019 at 10:00 am

In the old days, product architects would throw a functional block diagram "over the wall" to the design team, who would plan the physical implementation, analyze the timing of estimated critical paths, and forecast the signal switching activity on representative benchmarks. A common reply back to the architects was, "We've…"


Flex Logix InferX X1 Optimizes Edge Inference at Linley Processor Conference
by Camille Kokozaki on 04-18-2019 at 12:00 pm

Dr. Cheng Wang, Co-Founder and SVP Engineering at Flex Logix, presented the second talk in the 'AI at the Edge' session at the just-concluded Linley Spring Processor Conference, highlighting the InferX X1 Inference Co-Processor's high throughput, low cost, and low power. He opened by pointing out that existing inference solutions…