
Opinions Wanted on FPGA Coprocessors for HPC

Arthur Hanson

Well-known member
It's been reported that Intel is trying to change the game to compete with Nvidia. Any views on where this is going and who might win?
 
My opinion is that Intel's efforts around FPGAs in HPC are already a lost battle.

Simply because there is no FPGA on this planet that can compete with an ASIC in terms of performance/power/price ratio. And it seems the 14nm Stratix 10 will be no different.

So HPC will be divided between ASICs (like the Google TPU or the latest Sunway TaihuLight) and GPUs (in areas where developing hardware algorithms for an FPGA/ASIC would be a problem).

Just my opinion. Of course, Intel will gain some market share, as usual, but most of it will be thanks to monetary interventions or x86 compatibility demons.
 
Plus Intel is losing the process battle and their data center business is being challenged on multiple fronts:

Is the Intel Cash Cow in Danger?

I'm not saying Intel is going out of business, far from it, but double digit growth for Intel moving forward seems like the impossible dream.
 
They have to be done already if they want to release them this year.

They made some changes to the datasheets and optimization manuals just a few days/weeks ago; those changes were probably based on a close-to-final version of the silicon.

But again, I am just guessing. ;)

Btw: My opinion on Stratix 10 is that they badly want to reach more than 1 GHz, but logic in standard datapaths stays at 400-600 MHz. So they are leaning on hyperpipelining too much, which reminds me of the Pentium 4. My question is: does this higher clock rate mean they had to compensate for a lower gate count, or is it a genuine performance advantage?
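
A rough way to frame that question: throughput is roughly clock rate times the number of parallel lanes that fit on the device. A minimal Python sketch of that trade-off, with invented clock rates and lane counts rather than Stratix 10 figures:

def effective_throughput(f_clk_mhz, parallel_lanes):
    """Rough throughput in operations per second: clock rate times lane count."""
    return f_clk_mhz * 1e6 * parallel_lanes

# Hypothetical design A: conventional pipelining, more logic left for wide datapaths.
a = effective_throughput(f_clk_mhz=500, parallel_lanes=64)

# Hypothetical design B: hyperpipelined, higher clock but fewer lanes fit on the device.
b = effective_throughput(f_clk_mhz=900, parallel_lanes=40)

print(f"design A: {a / 1e9:.1f} Gop/s, design B: {b / 1e9:.1f} Gop/s")
# The higher clock only wins if the lane count does not shrink proportionally,
# which is exactly the gate-count question raised above.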
 
Yes, it taped out in the second half of 2015. I'm told that an Intel team took over and finished it. It really is going to be interesting to see a direct comparison between Altera 14nm and Xilinx 16nm parts. Do you remember this graph?

[Attachment 17644]
 
because tomorrow can't wait, and because predicting the future is hard

My opinion is that Intel's efforts around FPGAs in HPC are already a lost battle.

Simply because there is no FPGA on this planet that can compete with an ASIC in terms of performance/power/price ratio. And it seems the 14nm Stratix 10 will be no different.

Why the assumption that this is about HPC? FPGA in servers is about algorithm flexibility. ASICs are good if you have two years to go to market and the algorithm is well established and stable. There remains a lot of work outside of that where FPGAs compete. Given time, a well-understood problem domain, and a big enough market, yes, ASICs will show up eventually and dominate. But many problems remain too small to pay for an ASIC design, or too rapidly changing to freeze into fixed logic, or too urgent to tolerate a long wait to market. If you can fit those into an FPGA (perhaps as a hybrid of FPGA acceleration and host software), then why not?

I see FPGA as a general purpose part of a data center server, not necessarily specialized to HPCs.
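
To put a rough number on the "too small to pay for an ASIC design" argument, here is a minimal break-even sketch in Python; the NRE and unit costs are hypothetical placeholders, not quotes for any real part:

def breakeven_units(asic_nre, asic_unit_cost, fpga_unit_cost):
    """Units needed before the ASIC's up-front NRE pays for itself versus FPGAs."""
    savings_per_unit = fpga_unit_cost - asic_unit_cost
    if savings_per_unit <= 0:
        return float("inf")  # FPGA is cheaper per unit, so the ASIC never breaks even
    return asic_nre / savings_per_unit

# Hypothetical figures: $5M NRE, $50 per ASIC unit, $800 per FPGA unit.
print(f"break-even around {breakeven_units(5_000_000, 50, 800):,.0f} units")

Below that volume (or when the algorithm will not sit still long enough to justify the NRE), the FPGA keeps its place.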
 
Simply because there is no FPGA on this planet that can compete with an ASIC in terms of performance/power/price ratio. And it seems the 14nm Stratix 10 will be no different.

So HPC will be divided between ASICs (like the Google TPU or the latest Sunway TaihuLight) and GPUs (in areas where developing hardware algorithms for an FPGA/ASIC would be a problem).

Although my crystal ball is broken, I do see one big advantage for FPGA over ASIC, and that is reprogrammability, hence a very short time to market. It seems FPGA is already big in data-center-scale software-defined networking. The setups and needs of these networks evolve too fast to allow an ASIC implementation, and FPGAs allow higher throughput at lower power consumption than CPUs.

I think the more general applicability of FPGAs will depend on how memory bandwidth optimization is tackled and how easy it is for programmers to write optimized programs that make use of it; i.e. the tools will play a big role, and they should not look like an average EDA tool but more like a regular IDE (integrated development environment).
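
To illustrate why memory bandwidth optimization matters so much here, a rough roofline-style check in Python; the peak compute and bandwidth figures are placeholders, not numbers for any specific FPGA:

def attainable_gflops(peak_gflops, mem_bw_gbs, flops_per_byte):
    """Attainable performance is capped by either raw compute or memory traffic."""
    return min(peak_gflops, mem_bw_gbs * flops_per_byte)

# Hypothetical accelerator: 1000 GFLOP/s peak, 80 GB/s external DRAM bandwidth.
for intensity in (0.5, 2, 8, 32):  # arithmetic intensity in FLOPs per byte of traffic
    g = attainable_gflops(peak_gflops=1000, mem_bw_gbs=80, flops_per_byte=intensity)
    print(f"intensity {intensity:>4} FLOP/B -> {g:6.0f} GFLOP/s attainable")
# Low-intensity kernels stay bandwidth-bound no matter how much reconfigurable
# logic the device offers, so the tools have to help programmers restructure
# data movement, not just generate logic.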
 
As I said, it is just my opinion.

Today the purpose of FPGAs in data centers or HPC is mainly leading-edge networking. But this area is far from Nvidia's interests, which was the original question. And I do not think that will change any time soon.

Reconfigurability is often used as an argument for FPGAs, but is it really relevant? Most of the algorithms I have seen implemented were already well known decades ago. The limitation was what it was possible to implement in hardware.

And even if you need to implement something in an FPGA, development here is far from easy or short time to market. It is still hardware development. You mentioned that this could change by replacing the EDA tool with an IDE, but this is wrong. It will end up using pre-synthesized IP blocks hidden behind C++ functions, automatically assembled into some kind of soft core... So it will be a GPU. A pretty expensive GPU, I must add.

Which brings me back to the main reason why I believe FPGAs will not be used in general-purpose computing. Price. Imagine you have ~$120k. You can choose between 1x XCVU9P and 4x Tesla P100 for your compute node. And this is why no FPGA will compete with Nvidia's GPUs in HPC.
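
For what it's worth, that comparison can be framed as a simple operations-per-dollar calculation. The device counts below come from the post; the sustained-throughput figures are placeholders to be replaced with measurements for the actual workload:

budget = 120_000  # dollars for one compute node, as in the comparison above

def ops_per_dollar(sustained_ops_per_s, node_price):
    """Sustained node throughput divided by node price."""
    return sustained_ops_per_s / node_price

# Assumed sustained throughput per device (hypothetical, workload-dependent).
fpga_node = ops_per_dollar(1 * 2.0e12, budget)   # 1x XCVU9P, assumed 2.0 Top/s sustained
gpu_node  = ops_per_dollar(4 * 4.5e12, budget)   # 4x Tesla P100, assumed 4.5 Top/s each

print(f"FPGA node: {fpga_node:.2e} op/s per dollar")
print(f"GPU node:  {gpu_node:.2e} op/s per dollar")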

I hope I have explained why I hold this opinion. And I understand that you may have a different view on this topic.
 
I do see some potential for FPGA in select HPC applications. Search, for example, where Google is constantly tweaking and modifying their algorithms - reprogrammability is likely a useful feature there. But outside those niches I don't see FPGA being widely adopted.
 