Enhancing RISC-V Vector Extensions to Accelerate Performance on ML Workloads
by Kalar Rajendiran on 05-17-2021 at 10:00 am

During the week of April 19th, the Linley Group held its Spring Processor Conference 2021. The Linley Group has a reputation for convening excellent conferences, and this year's spring event was no exception. A number of very informative talks from various companies updated the audience on the latest research and development work happening in the industry. The presentations were grouped under eight subject areas: Edge AI, Embedded SoC Design, Scaling AI Training, AI SoC Design, Network Infrastructure for AI and 5G, Edge AI Software, Signal Processing, and Efficient AI Inference.

Artificial Intelligence (AI) as a technology has garnered a great deal of attention and investment in recent years, and the conference reflected that in the number of AI-related subject areas. Within the broader AI category, Edge AI received a disproportionate share of presentations, and justifiably so: edge computing is seeing rapid growth driven by IoT, 5G, and other applications with low-latency requirements.

One of the presentations within the Edge AI category was titled “Enhancing RISC-V Vector Extensions to Accelerate Performance on ML Workloads.” The talk was given by Chris Lattner, President, Engineering and Product at SiFive, Inc. Chris made a strong case for why SiFive’s solution, based on RISC-V vector extensions, is a great fit for AI-driven applications. The following is my take.

Market Requirements:

As quickly as the market for edge computing is growing, the performance and power requirements of these applications are becoming even more demanding. Many of these applications are AI driven and fall into the category of machine learning (ML) workloads, and AI adoption is shifting processing requirements toward data manipulation rather than general-purpose computing. Deep learning underlies ML models and involves processing large arrays of data. With ML models evolving fast, an ideal solution is one that optimizes for performance, power, ease of incorporating emerging ML models, and the scope of the resulting hardware and/or software changes.

RISC-V Vector Advantage:

The original motivation behind the initiative that gave us the RISC-V architecture was experimentation: developing chip designs that yield better performance in the face of the expected slowdown of Moore’s law. RISC-V is built on the idea of tailoring a chip by choosing which instruction-set extensions it implements. The vector extensions allow vectors of any length to be processed by code that does not assume a fixed hardware vector length, so existing software runs without a recompile when the hardware is upgraded with more ALUs and other functional units. Significant progress has been made in the established hardware base and the supporting ecosystem, such as compiler technologies.
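To make the vector-length-agnostic idea concrete, here is a minimal C sketch of the strip-mining pattern that the RISC-V vector extensions enable. The names `HW_VLEN` and `saxpy_vla` are illustrative choices of mine, not part of the RISC-V specification or SiFive's code, and the inner scalar loop stands in for what would be a single vector instruction on real hardware.

```c
#include <stddef.h>

/* Hypothetical per-chip vector length in elements. On real RISC-V
 * vector hardware this number is reported at run time (via the
 * vsetvli instruction), so the same binary adapts to wider or
 * narrower vector units without changes. */
#define HW_VLEN 8

/* Strip-mined SAXPY (y[i] += a * x[i]). Each outer trip stands in
 * for one vector instruction handling up to HW_VLEN elements; the
 * per-trip count vl shrinks automatically for the final tail, which
 * is why no separate scalar clean-up loop is needed. */
static void saxpy_vla(size_t n, float a, const float *x, float *y) {
    size_t i = 0;
    while (i < n) {
        size_t vl = (n - i < HW_VLEN) ? (n - i) : HW_VLEN;  /* vsetvli analogue */
        for (size_t k = 0; k < vl; ++k)  /* one vector op on real hardware */
            y[i + k] += a * x[i + k];
        i += vl;
    }
}
```

On actual vector hardware the same loop structure runs unchanged whether the chip processes 4 or 64 elements per trip, which is the property that lets upgraded hardware speed up existing binaries without a recompile.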

RISC-V can be optimized for a particular domain or application through custom extensions. And because RISC-V is an open-standard instruction set architecture, its users enjoy a great deal of flexibility in choosing a supplier for their chip design needs.

SiFive’s Offering:

SiFive has enhanced the RISC-V vector advantage by adding new vector extensions that accelerate the execution of many different neural network models. Refer to Figure 1 for an example of the speedup that can be gained using SiFive’s add-on extensions compared with the base RISC-V vector extensions alone. SiFive’s Intelligence X280 is a multi-core-capable RISC-V vector solution (hardware and software) that makes it easy for customers to implement optimized Edge AI applications. The solution can also be used to implement data center applications.

Figure 1:

(Image: SuperCharge ML Performance with RISC-V)

SiFive Advantage:

  • SiFive’s Intelligence X280 solution fully supports the TensorFlow and TensorFlow Lite open-source machine learning platforms (refer to Figure 2)
  • SiFive provides an easy way to migrate customers’ existing code from other architectures to the RISC-V vector architecture; for example, SiFive can translate Arm Neon code to RISC-V V assembly code
  • SiFive allows its customers to explore adding custom extensions to their RISC-V implementations
  • Through its OpenFive business unit, SiFive offers custom chip implementation services to address domain-specific silicon needs

 

Figure 2:

(Image: Full support for TensorFlow Lite on SiFive RISC-V)

Summary:

In a nutshell, SiFive customers can easily and rapidly implement their applications, whether those applications involve Edge AI workloads or traditional data center workloads. If you are interested in accelerating the performance of your ML workloads with SiFive’s solutions, I recommend registering to listen to Chris’ entire talk and then discussing with SiFive ways to leverage their offerings in developing your products.
