WEBINAR: FPGAs for Real-Time Machine Learning Inference
by Don Dingee on 11-30-2022 at 6:00 am

A server plus an accelerator with FPGAs for real-time machine learning inference reduces costs and energy consumption by up to 90 percent

With AI applications proliferating, many designers are looking for ways to reduce server footprints in data centers – and are turning to FPGA-based accelerator cards for the job. In a 20-minute session, Salvador Alvarez, Sr. Manager of Product Planning at Achronix, provides insight into the potential of FPGAs for real-time machine… Read More


Webinar – FPGA Native Block Floating Point for Optimizing AI/ML Workloads
by Tom Simon on 02-25-2020 at 10:00 am

[Figure: block floating point example]

Block floating point (BFP) has been around for a while but is just now starting to be seen as a very useful technique for performing machine learning operations. It's worth pointing out up front that bfloat is not the same thing. BFP combines the efficiency of fixed-point operations with the dynamic range of full floating… Read More
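
To make the idea concrete, here is a minimal sketch of BFP quantization in NumPy: each block of values shares a single exponent, and the values themselves are stored as small fixed-point integer mantissas. The block size, mantissa width, and function names are illustrative assumptions, not details taken from the webinar or from any FPGA implementation.

import numpy as np

def bfp_quantize(values, block_size=8, mantissa_bits=8):
    """Quantize a 1-D array to block floating point.

    Each block of `block_size` values shares one exponent (chosen from the
    largest magnitude in the block); the values are stored as fixed-point
    integer mantissas, so multiply-accumulate hardware stays as cheap as
    fixed point while retaining floating-point-like dynamic range.
    """
    values = np.asarray(values, dtype=np.float64)
    pad = (-len(values)) % block_size
    blocks = np.pad(values, (0, pad)).reshape(-1, block_size)

    # Shared exponent per block: enough to cover the largest magnitude.
    max_mag = np.max(np.abs(blocks), axis=1, keepdims=True)
    exponents = np.where(max_mag > 0, np.ceil(np.log2(max_mag + 1e-300)), 0)

    # Fixed-point mantissas expressed relative to the shared exponent.
    scale = 2.0 ** (exponents - (mantissa_bits - 1))
    mantissas = np.clip(np.round(blocks / scale),
                        -(2 ** (mantissa_bits - 1)),
                        2 ** (mantissa_bits - 1) - 1).astype(np.int32)
    return mantissas, exponents, pad

def bfp_dequantize(mantissas, exponents, pad, mantissa_bits=8):
    """Reconstruct approximate floating-point values from BFP blocks."""
    scale = 2.0 ** (exponents - (mantissa_bits - 1))
    out = (mantissas * scale).reshape(-1)
    return out[:len(out) - pad] if pad else out

if __name__ == "__main__":
    x = np.random.randn(20) * np.logspace(-2, 2, 20)  # wide dynamic range
    m, e, pad = bfp_quantize(x)
    x_hat = bfp_dequantize(m, e, pad)
    print("max relative error:", np.max(np.abs(x - x_hat) / np.abs(x)))

Because only the integer mantissas feed the multipliers while the shared exponent is handled once per block, this scheme maps naturally onto fixed-point DSP blocks in an FPGA, which is the efficiency argument the webinar excerpt alludes to.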