Webinar on Dealing with the Pain Points of AI/ML Hardware
by Tom Simon on 12-07-2021 at 6:00 am

Ever-increasing data handling demands make creating hardware for many applications extremely difficult. In an upcoming webinar, Achronix, a leading supplier of FPGAs, talks about the data handling requirements for AI/ML applications, which are growing at perhaps one of the highest rates of all. Just looking at data generated and consumed in general, the webinar host Tom Spencer, Senior Manager of Product Marketing at Achronix, points to the 294 million emails, 230 million tweets and over a billion searches performed daily. Worldwide totals for stored data have accelerated from 4.4 zettabytes in 2018 to 44 ZB in 2020 and are expected to grow to 175 ZB by 2025. A zettabyte is 10^21 bytes.
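As a back-of-the-envelope check, a few lines of Python (purely illustrative, using only the figures quoted above) show what that growth implies as an annual rate:

```python
# Worldwide stored-data figures quoted above, in zettabytes (ZB)
zb_2020 = 44.0
zb_2025 = 175.0
years = 5

# Implied compound annual growth rate between 2020 and 2025
cagr = (zb_2025 / zb_2020) ** (1 / years) - 1
print(f"Implied growth rate: {cagr:.1%} per year")   # roughly 32% per year

# A zettabyte is 10**21 bytes
bytes_2025 = zb_2025 * 10**21
print(f"{bytes_2025:.3e} bytes projected by 2025")
```

In other words, stored data roughly quadruples over those five years, or about a third more every year.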

AI/ML applications are especially burdened because they rely on rapidly growing training sets, network models and inference data. According to Tom, there are a number of significant pain points associated with developing hardware for AI/ML. Indeed, the title of the webinar is “How to Overcome the Pain Points of AI/ML Hardware”. Tom artfully weighs the competing accelerator options: GPU, FPGA and ASIC. He sees FPGAs as offering the most flexibility: they provide low latency, can get much more work done in a clock cycle than the alternatives, and can handle massive data volumes thanks to their data-flow structure.

OK, but what are the pain points? Tom is prepared to talk about the three pain points that must be dealt with to deliver hardware that can handle the task.

Compute power has been a limiting factor in building AI/ML solutions. AI/ML requires trillions of integer and/or floating-point operations per second. The data formats needed include fixed- and floating-point types from 3 bits to 64 bits, and now often include newer formats such as Block Floating Point (BFP) and bfloat16.
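For readers unfamiliar with bfloat16: it is essentially an IEEE-754 float32 with the bottom 16 mantissa bits dropped, keeping the full 8-bit exponent range. A minimal Python sketch (my own illustration, not Achronix code; real hardware typically rounds to nearest even rather than truncating) makes the relationship concrete:

```python
import struct

def float_to_bfloat16_bits(x: float) -> int:
    """Truncate a float32 to bfloat16: keep the sign bit, the 8-bit
    exponent and the top 7 mantissa bits (truncation sketch only)."""
    f32_bits, = struct.unpack("<I", struct.pack("<f", x))
    return f32_bits >> 16

def bfloat16_bits_to_float(b: int) -> float:
    """Expand 16 bfloat16 bits back into a float value by zero-filling
    the discarded low mantissa bits."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

b = float_to_bfloat16_bits(3.14159)
print(hex(b), bfloat16_bits_to_float(b))   # 0x4049 3.140625
```

The round trip shows the trade-off: the same dynamic range as float32, but only about 2–3 decimal digits of precision, which is often sufficient for neural-network arithmetic.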

Data has to be able to move on and off chip rapidly, otherwise processing will fall behind. Applications such as autonomous driving need to support high frame rates for high-resolution video. The need to achieve timing closure and build interfaces from scratch adds to the burden.
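To get a feel for the scale of that off-chip traffic, here is an illustrative raw-bandwidth estimate for uncompressed video. The resolution, frame rate and pixel depth are my assumptions for the sake of the example, not figures from the webinar:

```python
# Assumed parameters for one uncompressed video stream (illustrative)
width, height = 3840, 2160     # 4K UHD
fps = 60
bytes_per_pixel = 3            # 24-bit RGB

# Raw bandwidth for a single stream, in bytes per second
bandwidth = width * height * bytes_per_pixel * fps
print(f"{bandwidth / 1e9:.2f} GB/s per uncompressed 4K60 stream")
```

A single such stream is already around 1.5 GB/s, and an autonomous-driving system may process several camera feeds at once, which is why memory and I/O interfaces become a first-order design constraint.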

Similar to external data movement, FPGAs need the ability to move data internally to facilitate the data flow in the neural network. AI/ML requires huge numbers of parallel processing elements that must store and pass data internally. In many cases this leads to timing closure issues or consumes precious FPGA logic resources.

Achronix FPGA for AI/ML

The webinar will talk about how the Achronix Speedster7t FPGA family can address each of these pain points, making system design much easier and delivering improved performance. The Speedster7t is available as a stand-alone FPGA device, embeddable FPGA IP or in a packaged solution – such as the VectorPath accelerator card.

Achronix Speedster7t has specific features that work together to enable AI/ML workloads. The webinar will discuss each of them in detail, which I can summarize here. First, there are specialized Machine Learning Processors (MLPs) available as resources for AI/ML operations such as multiply-accumulate (MAC). There are over 2,500 MLPs per device, each with control, arithmetic and storage functions.

Next, the Speedster7t FPGA fabric is built with a 2D Network on Chip (NoC) that handles data transfers from one element to another. Because it is separate from the FPGA fabric elements, valuable resources are not used up just to move data across the array. The NoC is high speed, with more than 20 Tbps of aggregate bidirectional throughput.

Lastly, moving data on and off chip to external storage is accelerated by high-speed GDDR6 and DDR4 interfaces. The GDDR6 support provides 8 controllers with 16 lanes each for massive parallelism and flexibility. The DDR4 support provides 64-bit interfaces to 128 GB of RAM.
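Taking the controller and lane counts above at face value, a quick sanity check shows the scale of external bandwidth on offer. Note that the per-lane data rate below is my assumption for illustration (16 Gbps is a common GDDR6 per-pin rate), not a figure from the article:

```python
# Figures from the article
controllers = 8
lanes_per_controller = 16

# Assumed GDDR6 per-lane data rate in Gbps -- NOT from the article
gbps_per_lane = 16

aggregate_gbps = controllers * lanes_per_controller * gbps_per_lane
print(f"~{aggregate_gbps} Gbps (~{aggregate_gbps / 1000:.1f} Tbps) aggregate GDDR6 bandwidth")
```

Under that assumption the external memory bandwidth lands in the multi-terabit-per-second range, well matched to the 20+ Tbps internal NoC.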

Achronix offers comprehensive software support for AI/ML applications with a wide selection of frameworks, neural network models and development systems. They are targeting solutions such as CNNs, RNNs, transformer networks and feed-forward networks.

This webinar should provide a lot of useful information to developers of AI/ML hardware who are looking for a smoother path to a working product. Achronix has a proven record of innovation, such as their embeddable FPGA fabric, 2D NoC and high-speed interfaces. The webinar can be viewed on December 16th at 10 AM PST. Reserve your spot here.
