Efficiency – Flex Logix’s Update on InferX™ X1 Edge Inference Co-Processor
by Randy Smith on 10-30-2019 at 10:00 am

Last week I attended the Linley Fall Processor Conference held in Santa Clara, CA. This is the first of three blogs I will write based on what I saw and heard at the event.

In April, Flex Logix announced its InferX X1 edge inference co-processor. At that time, the company said the IP would be available and that a chip… Read More


AI Inference at the Edge – Architecture and Design
by Tom Dillinger on 09-23-2019 at 10:00 am

In the old days, product architects would throw a functional block diagram “over the wall” to the design team, who would plan the physical implementation, analyze the timing of estimated critical paths, and forecast the signal switching activity on representative benchmarks. A common reply back to the architects was, “We’ve… Read More


Highly Modular, AI-Specialized DNA 100 IP Core Targets IoT to ADAS
by Eric Esteve on 09-24-2018 at 7:00 am

The Cadence Tensilica DNA 100 DSP IP core is not a one-size-fits-all device. Instead, it is highly modular in order to support AI processing at the edge, delivering from 0.5 TMAC for on-device IoT up to tens or even 100 TMACs for autonomous vehicles (ADAS). If you remember the first talks about IoT and the Cloud a couple of years ago, the IoT… Read More