INTERVIEW: Bluespec RISC-V soft cores in Achronix FPGAs
by Don Dingee on 06-13-2024 at 6:00 am

Achronix Bluespec partnership highlights

Recently, a partnership between Achronix and Bluespec has been in the news. Bluespec RISC-V processors are available as soft cores in a Speedster®7t FPGA on Achronix’s VectorPath® PCIe development card or in a standalone Speedster7t FPGA. We spoke with executives from Achronix and Bluespec about the impetus for this effort … Read More


MIPI D-PHY IP brings images on-chip for AI inference
by Don Dingee on 03-13-2023 at 10:00 am

Perceive Ergo 2 brings images on-chip for AI inference with Mixel MIPI D-PHY IP

Edge AI inference is drawing growing attention as demand for AI processing spreads across an increasingly diverse set of applications, including those requiring low-power chips in consumer and enterprise-class devices. Much of the focus has been on optimizing the neural network processing engine for these… Read More


Deep thinking on compute-in-memory in AI inference
by Don Dingee on 03-09-2023 at 6:00 am

Compute-in-memory for AI inference uses an analog matrix to instantaneously multiply an incoming data word

Neural network models are advancing rapidly and becoming more complex. Application developers using these new models need faster AI inference but typically can’t afford more power, space, or cooling. Researchers have put forth various strategies in efforts to wring out more performance from AI inference architectures,… Read More
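The caption above captures the core compute-in-memory idea: the weight matrix is held in the analog array, and an incoming data word is multiplied against the whole matrix at once rather than one multiply-accumulate at a time. As a rough illustration of that math only (the matrix values, sizes, and variable names below are made up, and this is not any vendor's implementation), here is a minimal Python sketch contrasting the two views:

```python
import numpy as np

# Illustrative only: a 4x3 weight matrix held as analog conductances in the
# compute-in-memory array, and a 3-element input data word driven onto it.
W = np.array([[ 0.2, -0.5,  0.1],
              [ 0.7,  0.3, -0.2],
              [-0.1,  0.4,  0.6],
              [ 0.5, -0.3,  0.2]])
x = np.array([1.0, 0.5, -1.0])

# A digital inference engine accumulates multiply-accumulate (MAC) operations
# one at a time, fetching weights and activations for each step.
y_digital = np.zeros(W.shape[0])
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        y_digital[i] += W[i, j] * x[j]

# A compute-in-memory array produces the same matrix-vector product in a single
# analog step: every row's weighted contributions sum simultaneously.
y_analog = W @ x

assert np.allclose(y_digital, y_analog)
print(y_analog)
```

Both paths yield the same result; the point of the analog approach is collapsing the inner MAC loop into one parallel operation inside the memory array.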


Area-optimized AI inference for cost-sensitive applications
by Don Dingee on 02-15-2023 at 6:00 am

Expedera uses packet-centric scalability to move up and down in AI inference performance while maintaining efficiency

Often, AI inference brings to mind more complex applications hungry for more processing power. At the other end of the spectrum, applications like home appliances and doorbell cameras can offer limited AI-enabled features but must be narrowly scoped to keep costs to a minimum. New area-optimized AI inference technology from… Read More


Semifore is Supplying Pain Relief for Some World-Changing Applications
by Mike Gianfagna on 09-23-2022 at 8:00 am

In a recent post, I discussed how Samtec is fueling the AI revolution. In that post, I talked about how smart everything seems to be everywhere, changing the way we work and the way we think about our health, and ultimately improving life on the planet. These are lofty statements, but the evidence is growing that the newest wave of applications… Read More


Ultra-efficient heterogeneous SoCs for Level 5 self-driving
by Don Dingee on 09-14-2022 at 6:00 am

Ultra-efficient heterogeneous SoCs target the AI processing pipeline for Level 5 self-driving

The latest advanced driver-assistance systems (ADAS) like Mercedes’ Drive Pilot and Tesla’s FSD perform SAE Level 3 self-driving, with the driver ready to take back control if the vehicle calls for it. Reaching Level 5 – full, unconditional autonomy – means facing a new class of challenges unsolvable with existing technology… Read More


A clear VectorPath when AI inference models are uncertain
by Don Dingee on 08-22-2022 at 10:00 am

Achronix VectorPath Accelerator Card with Speedster 7t1500 FPGA for running AI inference models and more

The chase to add artificial intelligence (AI) into many complex applications is surfacing a new trend. There’s a sense these applications need a lot of AI inference operations, but very few architects can say precisely what those operations will do. Self-driving may be the best example, where improved AI model research and discovery… Read More


EasyVision: A turnkey vision solution with AI built-in
by Don Dingee on 07-21-2022 at 6:00 am

People counting with EasyVision, a turnkey vision solution from Flex Logix

Artificial intelligence (AI) is reserved for companies with hordes of data scientists, right? There are plenty of big problems where heavy-duty AI fits. There is also a space of smaller, well-explored problems where lighter AI can deliver rapid results. Flex Logix is taking that idea a step further, packaging their InferX X1 edge… Read More


Flex Logix Closes $55M in Series D Financing and Accelerates AI Inference and eFPGA Adoption
by Mike Gianfagna on 03-25-2021 at 10:00 am

Flex Logix is a unique company. It is one of the few that supplies both FPGA and embedded FPGA technology based on a proprietary programmable interconnect that uses half the transistors and half the metal layers of traditional FPGA interconnect. Their architecture provides some rather significant advantages. I wrote about their… Read More


Flex Logix Brings AI to the Masses with InferX X1
by Mike Gianfagna on 10-22-2020 at 10:00 am

InferX X1 PCIe board

In April, I covered a new AI inference chip from Flex Logix. Called InferX X1, this part had some very promising performance metrics. Rather than the data center, the chip focused on accelerating AI inference at the edge, where power and form factor are key metrics for success. The initial information on the chip was presented at … Read More