FPGA-Based Networking for Datacenters: A Deeper Dive

by Bernard Murphy on 08-10-2017 at 7:00 am

I’ve written before about the growing utility of FPGA-based solutions in datacenters, particularly for configurable networking applications. There I only touched on the general idea; Achronix have developed a white paper that expands on the need in more detail and explains how a range of solutions based on their PCIe Accelerator-6D board can meet it.


You may know Achronix for their embedded FPGA IP solution. In fact they launched their business with their Speedster22i family of FPGA devices optimized for wireline applications, and continue to enjoy success with that line. They now also provide a complete board built around a Speedster FPGA, a more turnkey solution ready to plug into datacenter applications.

Why is this important? Cloud services are a very competitive arena, especially among the 800-pound gorillas – AWS (Amazon), Azure (Microsoft), VMware/Dell and Google. Everyone is looking for an edge and the finish line keeps moving. You can’t do much (at least today) to differentiate in the basic compute engines; Intel (and AMD) have too much of a lead in optimizing those. But networking between engines is a different story. Raw speed is obviously an important factor, but so is optimizing virtualization; there’s always more you can do between blades and racks to improve multi-cluster performance and reduce power. Then there’s security. Networks are the highways on which malware and denial-of-service (DoS) attacks spread; correspondingly, they’re also where these attacks can be stopped if sufficiently advanced security techniques can be applied to transmitted data.

So each of these cloud service providers needs differentiated and therefore custom solutions. But the volume of parts they need, while high, is probably not sufficient to justify the NREs demanded by ASIC options. And more to the point, whatever they build this year may need to be improved next year and the year after that. What they need are network interface cards (NICs) configurable enough to differentiate, which means cost-effective off-the-shelf options don’t exist. That points fairly obviously to the advantages of an FPGA-based solution, which is exactly why Microsoft is using Intel/Altera FPGAs.
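The NRE-versus-volume tradeoff above can be sketched with simple break-even arithmetic. All of the dollar figures and volumes below are illustrative assumptions, not vendor pricing:

```python
# Toy break-even model: at what volume does an ASIC's lower unit cost
# recover its NRE compared with an FPGA-based NIC?
# All figures here are illustrative assumptions, not vendor pricing.

def breakeven_volume(asic_nre, asic_unit_cost, fpga_unit_cost):
    """Units at which total ASIC cost equals total FPGA-board cost."""
    if fpga_unit_cost <= asic_unit_cost:
        raise ValueError("ASIC must be cheaper per unit for a break-even to exist")
    return asic_nre / (fpga_unit_cost - asic_unit_cost)

# Hypothetical numbers: $5M NRE, $50/unit ASIC vs $300/unit FPGA board.
# Note the article's other point: a redesign next year re-incurs the NRE,
# so the break-even volume must be cleared per design cycle, not once.
volume = breakeven_volume(5_000_000, 50, 300)
print(f"Break-even at {volume:,.0f} units per design cycle")  # 20,000 units
```

The one-line takeaway: with yearly redesign cycles the NRE is recurring, so the break-even volume has to be reached every cycle, which tilts the economics further toward reconfigurable hardware.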

A good example of how a custom networking solution can outperform standard NIC solutions is remote DMA (RDMA) access between CPUs. Think about a typical tree configuration for a conventional local area network (LAN). North-south communication in the LAN, directly from root to leaves or vice versa, is efficient, potentially requiring only a single hop. But east-west traffic, between leaves, is much less efficient since each such transaction requires multiple hops. Standard NICs will always hand processing off to the system controller, and these transactions, which are common in RDMA, create additional system burden and drag down overall performance. This is where a custom solution to handle east-west transactions can offload traffic from the system and deliver higher overall performance.
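The hop-count asymmetry described above can be made concrete with a minimal sketch: model the LAN as a parent-pointer tree and count the links on the path between two nodes. The topology and node names here are invented for illustration:

```python
# Sketch of north-south vs east-west hop counts in a tree-topology LAN.
# Topology and node names are illustrative, not from the white paper.

def path_hops(tree, a, b):
    """Number of links on the unique tree path between nodes a and b."""
    def ancestors(n):
        chain = [n]
        while tree[n] is not None:
            n = tree[n]
            chain.append(n)
        return chain
    up_a, up_b = ancestors(a), ancestors(b)
    # climb from a until we reach an ancestor of b (the meeting point)
    for i, node in enumerate(up_a):
        if node in up_b:
            return i + up_b.index(node)
    raise ValueError("nodes are not in the same tree")

# Two-level tree: root switch -> leaf switches -> hosts
tree = {"root": None,
        "leaf0": "root", "leaf1": "root",
        "hostA": "leaf0", "hostB": "leaf0", "hostC": "leaf1"}

print(path_hops(tree, "root", "leaf0"))   # north-south: 1 hop
print(path_hops(tree, "hostA", "hostC"))  # east-west: 4 hops, via the root
```

Every east-west transaction between different leaves must climb to a common ancestor and back down, which is the traffic a custom RDMA offload engine can shortcut.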

Security is as much of a moving target as networking performance. Encryption between virtual machines is becoming increasingly common, as are methods to detect intrusion and prevent data loss. These capabilities could be implemented in software running on the system host, but that burns system cycles which aren’t going to the client application, which can chip away at competitive advantage or fail to live up to committed service-level agreements. Again, a better solution in some cases is to offload this compute into dedicated engines, not only to reduce impact on system load but also to gain a level of development and maintenance independence (and possibly additional security) by separating security functions from the VM system.
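The "burned cycles" point lends itself to a back-of-envelope model: if in-host security processing consumes some fraction of the host CPU, that fraction comes straight out of client capacity, and offloading to a dedicated engine returns it. The core count and overhead fraction below are illustrative assumptions, not measurements:

```python
# Toy model of host capacity lost to in-host security processing.
# Core count and overhead fraction are illustrative assumptions.

def client_capacity(total_cores, security_overhead, offloaded=False):
    """Cores left for client VMs after security processing.

    security_overhead: fraction of host CPU consumed by encryption,
    intrusion detection, etc. when run in software on the host.
    offloaded: True if that work runs on a dedicated NIC-side engine.
    """
    overhead = 0.0 if offloaded else security_overhead
    return total_cores * (1.0 - overhead)

host_only = client_capacity(64, security_overhead=0.20)
offloaded = client_capacity(64, security_overhead=0.20, offloaded=True)
print(f"in-host security: {host_only:.1f} cores for clients")  # 51.2
print(f"offloaded:        {offloaded:.1f} cores for clients")  # 64.0
```

Even a modest overhead fraction, multiplied across every host in a datacenter, is the kind of margin these providers compete on, which is the economic case for the dedicated engine.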


Achronix also talk about the roles their solution can play in Network Functions Virtualization (NFV), from assisting with test and measurement in software-defined networking (SDN) scenarios to providing an offload platform for NFV computation.

This is an exciting and rapidly growing domain. As we offload more and more of our local computation to the cloud, we expect to get higher performance at reasonable cost and, in most cases, with very high security. Effectively managing communication within the cloud is a big part of what will continue to grow cloud appeal; configurable, differentiable solutions based on platforms like the Accelerator-6D will be an important component in delivering that appeal. You can read more about Achronix’ Accelerator-6D solution HERE.
