Semifore is Supplying Pain Relief for Some World-Changing Applications
by Mike Gianfagna on 09-23-2022 at 8:00 am

In a recent post, I discussed how Samtec is fueling the AI revolution. In that post, I talked about how “smart everything” seems to be everywhere, changing the way we work and the way we think about our health, and ultimately improving life on the planet. These are lofty statements, but the evidence is growing that the newest wave of applications… Read More


Ultra-efficient heterogeneous SoCs for Level 5 self-driving
by Don Dingee on 09-14-2022 at 6:00 am

Ultra-efficient heterogeneous SoCs target the AI processing pipeline for Level 5 self-driving

The latest advanced driver-assistance systems (ADAS) like Mercedes’ Drive Pilot and Tesla’s FSD perform SAE Level 3 self-driving, with the driver ready to take back control if the vehicle calls for it. Reaching Level 5 – full, unconditional autonomy – means facing a new class of challenges unsolvable with existing technology… Read More


A clear VectorPath when AI inference models are uncertain
by Don Dingee on 08-22-2022 at 10:00 am

Achronix VectorPath Accelerator Card with Speedster 7t1500 FPGA for running AI inference models and more

The chase to add artificial intelligence (AI) into many complex applications is surfacing a new trend. There’s a sense these applications need a lot of AI inference operations, but very few architects can say precisely what those operations will do. Self-driving may be the best example, where improved AI model research and discovery… Read More


EasyVision: A turnkey vision solution with AI built-in
by Don Dingee on 07-21-2022 at 6:00 am

People counting with EasyVision, a turnkey vision solution from Flex Logix

Artificial intelligence (AI) is reserved for companies with hordes of data scientists, right? There are plenty of big problems where heavy-duty AI fits. There’s also a space of smaller, well-explored problems where lighter AI can deliver rapid results. Flex Logix is taking that idea a step further, packaging their InferX X1 edge… Read More
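The teaser doesn’t show how a turnkey people counter works under the hood, so here is a rough, generic sketch of the kind of task EasyVision addresses: counting person detections in a camera frame. It uses OpenCV’s stock HOG pedestrian detector purely for illustration; it is not Flex Logix’s EasyVision pipeline, and the file name and tuning parameters are assumptions.

```python
# Generic people-counting sketch using OpenCV's built-in HOG pedestrian
# detector. This only illustrates the task a turnkey vision product
# automates -- it is NOT how EasyVision or InferX is implemented.
import cv2

def count_people(frame) -> int:
    """Return the number of pedestrian detections in a single BGR frame."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    # winStride/scale trade detection quality against speed (values assumed).
    rects, _weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    return len(rects)

if __name__ == "__main__":
    img = cv2.imread("storefront.jpg")  # hypothetical test image
    if img is None:
        raise SystemExit("could not read test image")
    print(f"People counted: {count_people(img)}")
```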


Flex Logix Closes $55M in Series D Financing and Accelerates AI Inference and eFPGA Adoption
by Mike Gianfagna on 03-25-2021 at 10:00 am

Flex Logix is a unique company. It is one of the few that supplies both FPGA and embedded FPGA technology based on a proprietary programmable interconnect that uses half the transistors and half the metal layers of traditional FPGA interconnect. Their architecture provides some rather significant advantages. I wrote about their… Read More


Flex Logix Brings AI to the Masses with InferX X1
by Mike Gianfagna on 10-22-2020 at 10:00 am

InferX X1 PCIe board

In April, I covered a new AI inference chip from Flex Logix. Called InferX X1, this part had some very promising performance metrics. Rather than targeting the data center, the chip focuses on accelerating AI inference at the edge, where power and form factor are key metrics for success. The initial information on the chip was presented at … Read More


Efficiency – Flex Logix’s Update on InferX™ X1 Edge Inference Co-Processor
by Randy Smith on 10-30-2019 at 10:00 am

Last week I attended the Linley Fall Processor Conference held in Santa Clara, CA. This blog is the first of three blogs I will be writing based on things I saw and heard at the event.

In April, Flex Logix announced its InferX X1 edge inference co-processor. At that time, Flex Logix announced that the IP would be available and that a chip,… Read More


AI Inference at the Edge – Architecture and Design
by Tom Dillinger on 09-23-2019 at 10:00 am

In the old days, product architects would throw a functional block diagram “over the wall” to the design team, who would plan the physical implementation, analyze the timing of estimated critical paths, and forecast the signal switching activity on representative benchmarks. A common reply back to the architects was, “We’ve… Read More


Highly Modular, AI-Specialized DNA 100 IP Core Targets IoT to ADAS
by Eric Esteve on 09-24-2018 at 7:00 am

The Cadence Tensilica DNA 100 DSP IP core is not a one-size-fits-all device. But it’s highly modular in order to support AI processing at the edge, delivering from 0.5 TMAC for on-device IoT up to tens or even 100 TMACs to support autonomous vehicles (ADAS). If you remember the first talks about IoT and Cloud, a couple of years ago, the IoT … Read More
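To put that 0.5-to-100 TMAC/s range in perspective, here is a back-of-the-envelope sketch (my own numbers, not Cadence’s) that converts a peak MAC rate into a rough inference rate for a hypothetical vision network, assuming about 5 GMAC per inference and 50% sustained utilization.

```python
# Back-of-the-envelope throughput estimate for different TMAC/s budgets.
# All numbers here are assumptions for illustration, not Cadence DNA 100 data.

MACS_PER_INFERENCE = 5e9   # assumed ~5 GMAC per inference (ResNet-50-class CNN)
UTILIZATION = 0.5          # assumed fraction of peak MACs sustained on real layers

def inferences_per_second(peak_tmacs: float) -> float:
    """Convert a peak TMAC/s rating into a rough inference rate."""
    sustained_macs_per_s = peak_tmacs * 1e12 * UTILIZATION
    return sustained_macs_per_s / MACS_PER_INFERENCE

for tmacs in (0.5, 10.0, 100.0):
    print(f"{tmacs:>6.1f} TMAC/s -> ~{inferences_per_second(tmacs):,.0f} inferences/s")
```

Under these assumed numbers, the low end works out to tens of inferences per second, enough for a single modest camera or sensor stream, while the high end leaves room for the many concurrent, higher-resolution streams an ADAS pipeline has to process.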