Data Orchestration Hardware Unlocks the Full Potential of AI
by Mike Gianfagna on 06-10-2021 at 10:00 am

We all know that artificial intelligence (AI) and machine learning (ML) are fundamentally changing the world. From the smart devices that gather data to the hyperscale data centers that analyze it, the impact of AI/ML can be felt almost everywhere. It is also well-known that hardware accelerators have opened the door to real-time operation of advanced AI/ML algorithms – a key ingredient for success. What may be less well known is that this isn't the end of the story. Dedicated hardware accelerators performing parallel/pipelined operations can stall if they become starved for data. To prevent this, data orchestration is needed, and there are many ways to implement this function. A recent white paper from Achronix provides an excellent overview of how to keep your AI accelerators running at top speed. A link to the white paper is coming, but first let's examine the challenges of accelerator operation and how data orchestration hardware unlocks the full potential of AI.

What is Data Orchestration?

Data orchestration comprises the pre- and post-processing operations that ensure the data seen by something like a machine learning accelerator arrives at the right time and in the right form for efficient processing. Network and storage delays add to the challenge of keeping the accelerator fed. Operations range from resource management and utilization planning, to I/O adaptation, transcoding, conversion and sensor fusion, to data compaction and rearrangement within shared memory arrays. This is a complex set of operations. Let's examine some of the options for implementing data orchestration.
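
To make this concrete, here is a minimal sketch (in Python, purely illustrative and not tied to any Achronix implementation) of the kind of pre-processing an orchestration stage performs: adapting raw sensor frames to the format an accelerator expects and handing off fixed-size batches so the compute array is never left waiting on I/O. The batch size and data types are assumptions chosen for the example.

```python
# Minimal sketch of a data-orchestration pre-processing stage (hypothetical,
# not Achronix's implementation): convert raw sensor frames to the format an
# accelerator expects and hand off fixed-size batches so the compute array
# never sits idle waiting on I/O.
from collections import deque

import numpy as np

BATCH = 8  # assumed accelerator batch size


def to_accelerator_format(frame: np.ndarray) -> np.ndarray:
    """I/O adaptation: cast to the accelerator's numeric type and layout."""
    return np.ascontiguousarray(frame, dtype=np.int8)


def orchestrate(frames, submit):
    """Batch converted frames and submit each batch as soon as it is full."""
    pending = deque()
    for frame in frames:
        pending.append(to_accelerator_format(frame))
        if len(pending) == BATCH:
            submit(np.stack(pending))   # one contiguous block per transfer
            pending.clear()
    if pending:                          # flush the partial final batch
        submit(np.stack(pending))


# Example: 20 random "sensor frames" submitted as 3 batches.
orchestrate((np.random.rand(64, 64) * 127 for _ in range(20)),
            submit=lambda batch: print("submitted batch", batch.shape))
```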

Implementation Options

AI algorithms are complex, so there are many tasks that must be handled by the data orchestration function, whether it sits in a data center, an edge-computing environment or a real-time embedded application such as an advanced driver assistance system (ADAS). Tasks that need to be handled include (see the sketch after this list):

  • Data manipulation
  • Scheduling and load balancing across multiple vector units
  • Packet inspection to check for data corruption (e.g., caused by a faulty sensor)
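
As a rough illustration of the last two items, the sketch below balances incoming packets across a set of vector-unit queues and discards packets whose checksum doesn't match. The unit count and CRC scheme are assumptions for the example, not details from the white paper.

```python
# Hedged sketch of the scheduling and packet-inspection tasks listed above
# (illustrative only; the unit count and checksum scheme are assumptions).
import zlib
from dataclasses import dataclass


@dataclass
class Packet:
    payload: bytes
    crc32: int      # checksum attached by the sensor / NIC


NUM_VECTOR_UNITS = 4            # assumed number of vector units
queues = [[] for _ in range(NUM_VECTOR_UNITS)]


def inspect(pkt: Packet) -> bool:
    """Packet inspection: flag data corrupted by a faulty sensor or link."""
    return zlib.crc32(pkt.payload) == pkt.crc32


def dispatch(pkt: Packet) -> None:
    """Load balancing: send valid packets to the least-loaded vector unit."""
    if not inspect(pkt):
        return                                   # discard corrupted data
    target = min(range(NUM_VECTOR_UNITS), key=lambda i: len(queues[i]))
    queues[target].append(pkt.payload)


# Example: one good packet and one corrupted packet.
good = Packet(b"sensor-frame", zlib.crc32(b"sensor-frame"))
bad = Packet(b"sensor-frame", 0xDEADBEEF)
dispatch(good)
dispatch(bad)
print([len(q) for q in queues])   # -> [1, 0, 0, 0]
```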

One approach is to implement these functions by adding data-control and exception-handling hardware to the core processing array. The variety and complexity of the operations needed, and the fact that AI models continue to evolve, make a hardwired approach costly and easily obsoleted. Another approach is to use a programmable microprocessor to control the flow of data through an accelerator; here, the latency introduced by software-based execution creates its own set of performance problems. A programmable logic approach can provide the best fit. This technology allows modification in the field, avoiding the risk of the data orchestration engine becoming outdated.

The Achronix Approach

The white paper from Achronix provides a lot of valuable information regarding effective implementation of data orchestration hardware.

The piece discusses the use of FPGA and embedded FPGA technology for data orchestration. It points out that not all FPGAs are well-suited to these tasks. For example, typical FPGA architectures are not built as a core element of the datapath, but rather primarily for control-plane support for processors that interact with memory and I/O. Data orchestration requires input, transformation and management of data elements on behalf of processor and accelerator cores, which can put significant strain on traditional FPGA architectures.

To address these challenges, Achronix has developed a 2D network on chip (NoC) that allows data to be sent from the device I/O to the FPGA core and back at 2 GHz. The NoC requires no FPGA logic resources to perform the routing and avoids the logic congestion seen in traditional FPGA architectures.
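
As a back-of-the-envelope illustration of what a 2 GHz transfer rate means, assume a 256-bit NoC link width (an assumption for this example; only the 2 GHz figure comes from the article):

```python
# Back-of-the-envelope NoC bandwidth estimate. The 256-bit link width is an
# assumed value for illustration; only the 2 GHz rate comes from the article.
link_width_bits = 256
clock_hz = 2e9
bandwidth_gbps = link_width_bits * clock_hz / 1e9
print(f"{bandwidth_gbps:.0f} Gbps per NoC link")   # -> 512 Gbps
```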

The white paper details the FPGA architectural features needed to address the requirements of data orchestration. Data center, edge computing and real-time embedded-system requirements are discussed, along with the needs of inferencing algorithms. The specific challenges presented by real-time systems are covered as well. With the right FPGA architecture, a wide range of options becomes available for accelerating AI performance, as shown in the figure below.

Data Orchestration Provides a Number of Options for Accelerating AI Functions

The new Speedster7t FPGA and Speedcore eFPGA IP from Achronix are well-suited to the requirements of data orchestration. The white paper provides substantial detail to back up these claims. SemiWiki covered the new Speedster7t announcement here. And finally, you can get your copy of the Achronix white paper here. If there are AI accelerators in your next design, I highly recommend you check out this white paper to learn how data orchestration hardware unlocks the full potential of AI.
