WEBINAR: FPGAs for Real-Time Machine Learning Inference

by Don Dingee on 11-30-2022 at 6:00 am

A server paired with an FPGA-based accelerator for real-time machine learning inference reduces costs and energy consumption by up to 90 percent

With AI applications proliferating, many designers are looking for ways to reduce server footprints in data centers – and turning to FPGA-based accelerator cards for the job. In a 20-minute session, Salvador Alvarez, Sr. Manager of Product Planning at Achronix, provides insight on the potential of FPGAs for real-time machine… Read More


High-Performance Natural Language Processing (NLP) in Constrained Embedded Systems

by Kalar Rajendiran on 11-30-2021 at 6:00 am

Demonstrator Block Diagram

Current technology news is filled with talk of applications moving processing from the cloud to the edge. One of the presentations at the recently concluded Linley Group Fall Processor Conference was about AI moving from the cloud to the edge. Fittingly, several sessions were dedicated to discussing AI and edge… Read More