We have all seen the announcements promising ever-increasing network capability within the data center. Enabling these advances are improvements in connectivity, including SerDes, PAM4, optical solutions, and many others. 40G now seems like old news, and the current push is for 400G – things are changing very quickly. These advancements focus on high-speed transmission of data within the data center. What has not been discussed as much is the extra burden that managing all this traffic can place on the processors themselves. What would be the point of connecting all of these blazingly fast processors if their efforts went only towards talking to each other? Into the breach stepped “SmartNICs,” also known as intelligent server adapters (ISAs). These devices can offload many network-management tasks from the host CPUs, freeing those CPUs for meaningful work rather than networking and housekeeping.
SmartNICs have been under discussion for several years now, though the name has been used more as a marketing term without a clear definition. One short definition is that a SmartNIC:
- Implements complex server-based functions requiring compute, networking, and storage;
- Supports an adaptable data plane with minimal limitations on available functions;
- Works seamlessly with existing open-source ecosystems.
As I said, this is the “short definition” – the topic is quite intricate. Fortunately, Achronix has released a white paper titled How to Design SmartNICs Using FPGAs to Increase Server Compute Capacity. The paper begins by discussing the three forms of SmartNICs in use today: multicore SmartNICs, based on ASICs containing multiple CPU cores; FPGA-based SmartNICs; and FPGA-augmented SmartNICs, which combine hardware-programmable FPGAs with ASIC network controllers.
As you will have noticed from the white paper's title, Achronix uses FPGAs in its SmartNIC solution. The reason is the limitations inherent in multicore SmartNIC designs, which usually center on an ASIC incorporating many software-programmable microprocessor cores. The cores used may vary, but these solutions are expected to remain limited for two reasons: (a) they are based on software-programmable processors, which are slower at network processing because they lack hardware parallelism; and (b) the fixed-function hardware engines in these multicore ASICs lack the data-plane programmability and flexibility increasingly required for SmartNIC offloads. Multiple cores simply cannot achieve the parallelism gained from numerous custom pipelines in an FPGA.
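To see why pipelining matters, a back-of-the-envelope cycle-count model helps. This is my own illustration, not something from the white paper; the stage count and packet count are made-up numbers, and real designs have many other factors (clock rates, memory bandwidth, pipeline replication):

```python
# Toy cycle-count model (illustrative only, not from the Achronix white
# paper): compare a single in-order core that runs every processing
# stage per packet against an FPGA-style pipeline in which each stage
# is its own hardware unit and all stages operate concurrently.

def sequential_cycles(packets: int, stages: int) -> int:
    """A lone core must run all stages for each packet in turn."""
    return packets * stages

def pipelined_cycles(packets: int, stages: int) -> int:
    """Once the pipeline fills (`stages` cycles), it retires one
    packet per cycle thereafter."""
    return stages + (packets - 1)

if __name__ == "__main__":
    packets, stages = 1_000_000, 8  # hypothetical parse/match/modify/... stages
    seq = sequential_cycles(packets, stages)
    pipe = pipelined_cycles(packets, stages)
    print(f"sequential: {seq:,} cycles")
    print(f"pipelined:  {pipe:,} cycles")
    print(f"speedup:    {seq / pipe:.1f}x")  # approaches the stage count
```

For large packet counts the speedup approaches the number of stages, and an FPGA can also replicate whole pipelines side by side for further parallelism, whereas a multicore ASIC must time-slice each packet through software.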
There are many combinations and layers of features available when building a SmartNIC with an FPGA, and the Achronix white paper goes into these variants in detail. I found it particularly good at describing the architectural modifications needed to achieve specific features. The white paper covers the concept, architecture, and implementation of a SmartNIC using FPGAs, and anyone with an interest in this area should pick it up. You will find a long list of white papers, including this one, in the documentation section of the Achronix website; access requires only minimal registration information.
I felt I learned a lot going through this white paper, as it contains so much information and so many examples. If you care about this topic, pick up a copy now.