Embedded FPGAs create new IP category
by Tom Simon on 07-07-2017 at 12:00 pm

FPGAs are the new superstars in the world of Machine Learning and Cloud Computing, and with new methods of implementing them in SoCs there will be even more growth ahead. FPGAs started out as a cost-effective way to implement logic without having to spin an ASIC or gate array. With the advent of the web and high-performance networking hardware, discrete FPGAs evolved into heavy-duty workhorses. The market has also matured and shaken out, leaving two large gorillas and a number of smaller players. However, the growth of AI and the quest for dramatically improved cloud server hardware seems to be expanding the role of FPGAs.


At DAC in Austin I came across Achronix, a relatively new FPGA company that is experiencing a renaissance. I stopped by to speak with Steve Mensor, their VP of Marketing. There was reason enough to talk with him because of their recent announcement that their YTD revenues for 2017 are already over $100M. This is largely the result of solid growth in their Speedster 22i line of FPGA chips. Achronix originally implemented this line at the debut of Intel’s Custom Foundry on the then state-of-the-art 22nm FinFET node, giving them the distinction of being the first customer of Intel’s Custom Foundry.


Building on this, Steve was eager to talk about their game-changing IP offering of embedded FPGA cores – aptly named Speedcore eFPGA. These are offered as fully customized embedded FPGA cores that can be integrated right into system-level SoCs. To understand why this is important, we have to look at a recent Microsoft research project called Catapult, whose goal was to significantly boost search engine performance. Microsoft discovered that there was a big advantage in converting a subset of the search engine software into hardware optimized for the specific compute operation. This advantage is amplified when these compute tasks can be made massively parallel – exactly the kind of thing that FPGAs are good at. They also studied the same approach for cloud computing with Azure and found performance benefits there too.

The next market factor that makes embedded FPGA cores look extremely attractive is neural networks. Both training and recognition require massive computing that can be broken into parallel operations. The recognition phase – such as the one running in an autonomous vehicle – can be implemented largely with integer operations. Once again this aligns nicely with FPGA capabilities. So if FPGAs can boost search engine and AI applications, what are the barriers to implementing them in today’s systems?
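To make the integer-operations point concrete, here is a toy sketch (not Achronix-specific, and the function names are my own) of the 8-bit quantized multiply-accumulate that inference accelerators use. Each product in the loop is independent, which is exactly the structure that maps onto parallel FPGA DSP blocks:

```python
def quantize(values, scale):
    """Map float values to int8 with a simple symmetric scheme."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def int_dot(a, b):
    """Integer multiply-accumulate; the products are independent,
    so in hardware the loop can be unrolled into parallel units."""
    return sum(x * y for x, y in zip(a, b))

# Quantize float weights and inputs once, up front.
weights = quantize([0.5, -1.25, 0.75], scale=0.01)   # -> [50, -125, 75]
inputs  = quantize([1.0,  0.5, -0.5], scale=0.01)    # -> [100, 50, -50]

acc = int_dot(weights, inputs)   # pure integer arithmetic
result = acc * 0.01 * 0.01       # rescale to float once at the end
```

The key property is that the entire inner loop runs in integer arithmetic; floating point appears only in the one-time quantization and final rescale, which is why recognition workloads fit FPGA fabric so well.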

If you look at the current marketing materials from Altera and Xilinx, you can see that they dedicate a lot of energy to developing and promoting their IO capabilities. Getting data in and out of an FPGA is a critical function. Examining the floor plan of an FPGA chip, you will see a large area devoted to programmable IOs. Of course, along with the large area these resources consume comes higher power consumption.


Embedding an eFPGA core means that interface lines can be connected directly to the rest of the design. With less area needed per signal, wider buses can be implemented. Interfaces can also run faster, since on-chip integration reduces interface signal integrity and timing issues.

The other benefit alluded to earlier is that an eFPGA can be configured to achieve optimal performance. The adjustable parameters include the number of LUTs, embedded memories and DSP blocks. Customers receive GDSII that is ready to stitch into their design, and the Speedcore eFPGA tool chain accommodates these custom configurations.
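As a rough illustration of what "configurable parameters" means in practice, the sketch below models a core specification as plain data. The parameter names and values here are invented for illustration and are not the actual Speedcore configuration interface:

```python
# Hypothetical eFPGA core specification -- field names and sizes are
# invented for this sketch, not Achronix's real tool-chain inputs.
efpga_config = {
    "luts": 25_000,       # number of programmable lookup tables
    "bram_blocks": 64,    # embedded memory blocks
    "bram_kbits": 18,     # capacity of each memory block, in kbits
    "dsp_blocks": 32,     # multiply-accumulate DSP blocks
}

def total_bram_kbits(cfg):
    """Total embedded memory the configured core would provide."""
    return cfg["bram_blocks"] * cfg["bram_kbits"]

def summarize(cfg):
    """One-line resource summary for a candidate configuration."""
    return (f"{cfg['luts']} LUTs, {cfg['dsp_blocks']} DSPs, "
            f"{total_bram_kbits(cfg)} kbits BRAM")
```

The point is simply that, unlike a fixed discrete FPGA, the resource mix is a design-time knob: a customer sizes the fabric to the workload and receives layout for exactly that configuration.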


Steve told me that today the largest share of their impressive revenue comes from standalone chips, but by 2020 he expects 50% of their sales to be embedded. Another application for FPGAs is use as chiplets in 2.5D designs. But more on that in future writings.

Steve emphasized that designing FPGAs is pretty tricky. There are power and signal integrity issues that need to be solved due to their massive interconnect. Real improvement comes only over time, with years of experience optimizing and tuning the architecture. Steve suggested that many small improvements over time have added up to much better results in their FPGAs.

Right now it looks like Achronix is positioned to break away from the pack of smaller FPGA providers and potentially revolutionize the market. With this new approach, FPGAs can be said to have decisively transitioned from their early days as a glue logic vehicle to a pivotal component of advanced computing and networking applications. For more details on Achronix eFPGA cores, take a look at their website.
