WEBINAR: The Rise of the DPU
by Don Dingee on 04-29-2024 at 6:00 am

The server and enterprise network boundary has seen complexity explode in recent years. What used to be a simple TCP/IP offload task for network interface cards (NICs) is transforming into full-blown network acceleration using a data processing unit (DPU), able to make decisions based on traffic routes, message content, and network context. Parallel data path acceleration on hundreds of millions of packets at speeds reaching 400 Gbps is where Achronix is putting its high-performance FPGAs to work. Recently, Achronix hosted a LinkedIn Live event on “The Rise of the DPU,” bringing together four experienced server and networking industry veterans to discuss DPU trends and field audience questions on architectural concepts.
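The "hundreds of millions of packets" figure follows from standard line-rate arithmetic. As a rough sanity check (assuming worst-case minimum-size 64 B Ethernet frames plus the standard 20 B of per-frame preamble and inter-frame gap; these are general Ethernet figures, not numbers from the webinar):

```python
# Back-of-envelope: packets per second at 400 Gbps line rate.
# Worst case is minimum-size Ethernet frames (64 B) plus 20 B of
# per-frame overhead (8 B preamble + 12 B inter-frame gap).
LINE_RATE_BPS = 400e9
FRAME_BYTES = 64
OVERHEAD_BYTES = 20  # preamble + inter-frame gap

bits_per_frame = (FRAME_BYTES + OVERHEAD_BYTES) * 8
pps = LINE_RATE_BPS / bits_per_frame
print(f"{pps / 1e6:.0f} Mpps")  # ~595 Mpps at minimum frame size
```

At roughly 595 million packets per second, the data path has under 2 ns per packet, which is why parallel FPGA pipelines rather than sequential processor cores handle the fast path.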

The Rise of the DPU panel

DPUs add efficient processing while retaining programmability

The event begins by recognizing that industry emphasis is shifting from smartNICs to DPUs. Ron Renwick, Director of Product Marketing at Achronix and host for the event, describes the evolution leading to DPUs: wire speeds increase, offload functionality grows, and ultimately, localized processor cores arrive in the high-speed data path. “Today’s model is the NIC pipeline and processors all embedded into a single FPGA,” he says, with a tightly coupled architecture programmable in a standard environment.

evolution of SmartNICs to DPUs

Renwick also notes that creating a dedicated SoC with similar benefits is possible. However, the cost of developing a dedicated chip, and its limited ability to absorb the data path and processing changes that inevitably appear as network features and threats evolve, make an Achronix FPGA on a DPU a better choice for most situations.

Baron Fung of the Dell’Oro Group agrees, noting that the hyperscale data centers are already moving decisively toward DPUs. His estimates pin market growth at a healthy 25% CAGR, headed for a $6B total in the next five years. Fung shares that hyperscalers using smartNICs still chew up as much as half their CPU cores on network overhead services like security, storage, and software-defined features. Moving to a DPU frees up most, if not all, of the server processing cores, so cloud and data center customers get the processing they’ve paid for.
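Fung's projection also implies a rough current market size, recoverable with simple CAGR arithmetic (the roughly $2B base is an inference from the quoted numbers, not a figure stated in the webinar):

```python
# Implied current DPU market size, working backward from the
# Dell'Oro projection: a $6B total in five years at a 25% CAGR.
CAGR = 0.25
TARGET_USD = 6e9
YEARS = 5

implied_base = TARGET_USD / (1 + CAGR) ** YEARS
print(f"Implied current market: ${implied_base / 1e9:.1f}B")  # ~$2.0B
```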

why use DPUs

Patrick Kennedy of the review site Serve the Home echoes this point, saying that smartNICs need a management CPU complex, while DPUs have processing, memory, storage, and possibly an operating system on board. Kennedy reminds everyone that introducing an OS on a DPU creates another point in the system that must be secured and managed.

AI reshaping networks with DPUs in real time

The wildcard in DPU adoption rates may be the fourth bubble in the image above – accelerated computing with AI. Scott Schweitzer, Director of DPU Product Planning at Achronix, says that in any networking application, reducing latency and increasing determinism go hand in hand with increased bandwidth. “Our high-performance 2D network-on-chip operating at 2 GHz allows us to define blocks dynamically on the chip to set up high-speed interconnect between various adapters in a chassis or rack,” he continues. Machine learning cores in an FPGA on the DPU can process those network configuration decisions locally.

Fung emphasizes that AI will help offload the control plane by function. “AI-based DPUs improve resource utilization of accelerated servers and in scalable clusters,” he adds. Using DPUs to connect and share resources may have a strong use case in large GPU-based AI training clusters, helping open the architecture around Ethernet.

Kennedy likes the idea of AI clusters, recognizing that training is a different problem than inference. “Once you have models trained, you now have to be able to serve a lot of users,” he observes. DPUs with Ethernet networks make sense as the user-facing offload that can help secure endpoints, ingest data, and configure the network for optimum performance.

Those are some highlights from the first half of the event. In the second half, the open discussion among the panelists uses audience questions to generate starting points for topics touching on future DPU features and use cases, hyperscaler and telecom adoption, coordinating DPUs with other network appliances, and more. Much of the value of these Achronix events is in these discussions, with unscripted observations from Achronix experts and their guests.

For the entire conversation, watch the recorded webinar:
LinkedIn Live: The Rise of the DPU

Also Read:

WEBINAR: FPGA-Accelerated AI Speech Recognition

Unveiling the Future of Conversational AI: Why You Must Attend This LinkedIn Live Webinar

Scaling LLMs with FPGA acceleration for generative AI
